Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636775537 - Will randomize all specs
Will run 5770 specs
Running in parallel across 10 nodes
Nov 13 03:52:19.521: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.526: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:19.550: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 03:52:19.607: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 03:52:19.607: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 03:52:19.607: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 03:52:19.607: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 03:52:19.607: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 03:52:19.624: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 03:52:19.624: INFO: e2e test version: v1.21.5
Nov 13 03:52:19.625: INFO: kube-apiserver version: v1.21.1
Nov 13 03:52:19.626: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.632: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.630: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.646: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.631: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.652: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.633: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.655: INFO: Cluster IP family: ipv4
Nov 13 03:52:19.633: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.655: INFO: Cluster IP family: ipv4
Nov 13 03:52:19.632: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.655: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.641: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.659: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.658: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.678: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.664: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.684: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:52:19.667: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:52:19.688: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:19.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
W1113 03:52:19.915033 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:19.915: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:19.917: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Nov 13 03:52:19.924: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Nov 13 03:52:20.030: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:20.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1276" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.147 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
[Slow] Nginx
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
should conform to Ingress spec [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:20.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W1113 03:52:20.202817 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:20.203: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:20.204: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:52:20.206: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:20.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1364" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866
S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should work for type=NodePort [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:19.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1113 03:52:19.875960 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:19.876: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:19.877: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Nov 13 03:52:19.890: INFO: Creating new exec pod
Nov 13 03:52:29.907: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Nov 13 03:52:29.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-649 exec execpod-noendpointsff5h2 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Nov 13 03:52:31.514: INFO: rc: 1
Nov 13 03:52:31.514: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-649 exec execpod-noendpointsff5h2 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 REFUSED command terminated with exit code 1 error: exit status 1
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:31.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-649" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:11.670 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be rejected when no endpoints exist
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":44,"failed":0}
------------------------------
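The REFUSED result above comes from agnhost's connect subcommand, which attempts a TCP dial and classifies the failure. A minimal Go sketch of that kind of probe, assuming the no-pods:80 target from the test above (this is an illustration, not the agnhost source):

package main

import (
	"errors"
	"fmt"
	"net"
	"os"
	"syscall"
	"time"
)

// Dial with a timeout and classify the failure, roughly what
// `/agnhost connect --timeout=3s no-pods:80` does in the log above.
func main() {
	conn, err := net.DialTimeout("tcp", "no-pods:80", 3*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("connected (unexpected for a service with no endpoints)")
		return
	}
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("REFUSED")
	case os.IsTimeout(err):
		fmt.Println("TIMEOUT")
	default:
		fmt.Println("OTHER:", err)
	}
	os.Exit(1)
}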
[BeforeEach] [sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:20.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Nov 13 03:52:20.272: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:22.277: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:24.278: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:26.280: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:28.277: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Nov 13 03:52:28.292: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:30.295: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:32.297: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:34.296: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:36.296: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:38.295: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:40.295: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Nov 13 03:52:42.314: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:44.322: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:46.321: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Nov 13 03:52:46.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-6598 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Nov 13 03:52:46.593: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 6 flow entries have been shown.\n"
Nov 13 03:52:46.593: INFO: stdout: "tcp 6 3597 CLOSE_WAIT src=10.244.3.47 dst=10.10.190.208 sport=55372 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=59779 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Nov 13 03:52:46.593: INFO: conntrack entry for node 10.10.190.208 and port 11302: tcp 6 3597 CLOSE_WAIT src=10.244.3.47 dst=10.10.190.208 sport=55372 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=59779 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
[AfterEach] [sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:46.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-6598" for this suite.
• [SLOW TEST:26.366 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should set TCP CLOSE_WAIT timeout [Privileged]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":1,"skipped":158,"failed":0}
------------------------------
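In the conntrack entry above, the third column (3597) is the remaining timeout of the CLOSE_WAIT flow in seconds, which is the value this spec effectively asserts on (close to one hour in this run). A small Go sketch that pulls that field out of a line like the one logged, assuming the `conntrack -L` column layout shown (the sample entry is copied from this run; the helper is illustrative, not the e2e framework's code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// closeWaitTimeout returns the remaining timeout (third column) of a
// conntrack -L line such as "tcp 6 3597 CLOSE_WAIT src=... dport=11302 ...".
func closeWaitTimeout(entry string) (int, error) {
	f := strings.Fields(entry)
	if len(f) < 4 || f[3] != "CLOSE_WAIT" {
		return 0, fmt.Errorf("not a CLOSE_WAIT entry: %q", entry)
	}
	return strconv.Atoi(f[2])
}

func main() {
	entry := "tcp 6 3597 CLOSE_WAIT src=10.244.3.47 dst=10.10.190.208 sport=55372 dport=11302 [ASSURED] mark=0 use=1"
	secs, err := closeWaitTimeout(entry)
	if err != nil {
		panic(err)
	}
	fmt.Printf("remaining CLOSE_WAIT timeout: %ds\n", secs) // ~3600s (1h) in this run
}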
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:46.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Nov 13 03:52:46.724: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:46.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-6491" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have correct firewall rules for e2e cluster [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204
Only supported for providers [gce] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:19.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:52:19.684339 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:19.684: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:19.688: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-784
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:19.795: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:19.833: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:21.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:23.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:25.838: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:27.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:29.838: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:31.842: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:33.837: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:35.838: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:37.838: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:39.837: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:41.838: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:52:41.844: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:43.848: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:52:47.870: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:52:47.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:47.877: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:47.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-784" for this suite.
S [SKIPPING] [28.231 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for pod-Service: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:20.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:52:20.284255 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:20.284: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:20.286: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-2813
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:20.399: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:20.436: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:22.439: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:24.439: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:26.441: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:28.439: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:30.440: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:32.441: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:34.440: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:36.441: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:38.440: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:40.440: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:42.441: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:52:42.447: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:44.450: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:46.452: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:48.451: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:50.450: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:52.451: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:52:58.489: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:52:58.489: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:58.497: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:52:58.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2813" for this suite.
S [SKIPPING] [38.241 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for node-Service: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:19.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:52:20.002705 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:20.003: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:20.004: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-1734
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:20.120: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:20.158: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:22.163: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:24.162: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:26.163: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:28.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:30.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:32.163: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:34.162: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:36.162: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:38.162: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:40.163: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:52:40.169: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:42.173: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:44.172: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:46.174: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:48.171: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:50.172: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:52.174: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:53:02.198: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:53:02.199: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:02.205: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:02.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1734" for this suite.
S [SKIPPING] [42.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should be able to handle large requests: http [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:20.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:52:20.359213 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:20.359: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:20.361: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-9530
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:20.475: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:20.510: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:22.513: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:24.513: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:26.514: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:28.513: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:30.513: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:32.514: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:34.514: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:36.515: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:38.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:40.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:42.514: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:52:42.519: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:44.522: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:46.526: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:48.523: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:50.523: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:52:52.523: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:53:02.559: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:53:02.559: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:02.567: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:02.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9530" for this suite.
S [SKIPPING] [42.239 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should update nodePort: udp [Slow] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:31.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-4062
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:31.712: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:31.745: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:33.749: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:35.748: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:37.750: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:39.749: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:41.750: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:43.749: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:45.748: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:47.749: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:49.749: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:51.748: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:53.750: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:52:53.755: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:53:03.791: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:53:03.791: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:03.799: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:03.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4062" for this suite.
S [SKIPPING] [32.236 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for node-Service: http [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:58.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-2470
Nov 13 03:52:58.547: INFO: hairpin-test cluster ip: 10.233.32.51
STEP: creating a client/server pod
Nov 13 03:52:58.560: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:00.564: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:02.564: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-2470 to expose endpoints map[hairpin:[8080]]
Nov 13 03:53:02.572: INFO: successfully validated that service hairpin-test in namespace services-2470 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Nov 13 03:53:03.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2470 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Nov 13 03:53:04.195: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Nov 13 03:53:04.195: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Nov 13 03:53:04.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2470 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.32.51 8080'
Nov 13 03:53:04.540: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.32.51 8080\nConnection to 10.233.32.51 8080 port [tcp/http-alt] succeeded!\n"
Nov 13 03:53:04.540: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:04.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2470" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:6.031 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should allow pods to hairpin back to themselves through services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":215,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:47.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-5917
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:48.059: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:48.093: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:50.096: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:52.098: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:54.096: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:56.098: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:58.098: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:00.097: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:02.097: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:04.098: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:06.099: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:08.096: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:10.097: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:53:10.102: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:53:16.123: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:53:16.124: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:16.130: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:16.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5917" for this suite.
S [SKIPPING] [28.189 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for client IP based session affinity: http [LinuxOnly] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:16.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov 13 03:53:16.270: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov 13 03:53:16.275: INFO: starting watch
STEP: patching
STEP: updating
Nov 13 03:53:16.287: INFO: waiting for watch events with expected annotations
Nov 13 03:53:16.287: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Nov 13 03:53:16.287: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:16.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-344" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":75,"failed":0}
------------------------------
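The Netpol API spec above exercises create, get, list, watch, patch, update, and delete against networking.k8s.io/v1. For reference, a minimal client-go sketch of just the create step; the kubeconfig path, target namespace, and the deny-all-ingress policy shape are assumptions for illustration, not taken from the test:

package main

import (
	"context"
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig path the suite logs above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A deny-all-ingress policy selecting every pod in the namespace (illustrative shape).
	np := &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-all-ingress"},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
	created, err := cs.NetworkingV1().NetworkPolicies("default").Create(context.TODO(), np, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created NetworkPolicy:", created.Name)
}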
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:03.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Nov 13 03:53:03.891: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:05.894: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:07.894: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Nov 13 03:53:07.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6746 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Nov 13 03:53:08.374: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Nov 13 03:53:08.374: INFO: stdout: "iptables"
Nov 13 03:53:08.374: INFO: proxyMode: iptables
Nov 13 03:53:08.383: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Nov 13 03:53:08.385: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-6746
Nov 13 03:53:08.390: INFO: sourceip-test cluster ip: 10.233.17.72
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Nov 13 03:53:08.406: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:10.409: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:12.411: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:14.410: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:16.410: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-6746 to expose endpoints map[echo-sourceip:[8080]]
Nov 13 03:53:16.419: INFO: successfully validated that service sourceip-test in namespace services-6746 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Nov 13 03:53:16.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Nov 13 03:53:18.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5975c87cb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 03:53:20.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372399, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372396, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5975c87cb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 03:53:22.437: INFO: Waiting up to 2m0s to get response from 10.233.17.72:8080
Nov 13 03:53:22.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6746 exec pause-pod-5975c87cb-6zjls -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.17.72:8080/clientip'
Nov 13 03:53:22.804: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.17.72:8080/clientip\n"
Nov 13 03:53:22.804: INFO: stdout: "10.244.3.59:43774"
STEP: Verifying the preserved source ip
Nov 13 03:53:22.804: INFO: Waiting up to 2m0s to get response from 10.233.17.72:8080
Nov 13 03:53:22.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6746 exec pause-pod-5975c87cb-kfhbw -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.17.72:8080/clientip'
Nov 13 03:53:23.073: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.17.72:8080/clientip\n"
Nov 13 03:53:23.073: INFO: stdout: "10.244.4.177:43114"
STEP: Verifying the preserved source ip
Nov 13 03:53:23.073: INFO: Deleting deployment
Nov 13 03:53:23.081: INFO: Cleaning up the echo server pod
Nov 13 03:53:23.088: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:23.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6746" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:19.254 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":89,"failed":0}
------------------------------
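The /clientip responses above ("10.244.3.59:43774", "10.244.4.177:43114") are the source address the echo server observed; the spec passes because the host part matches each pause pod's IP, i.e. the client IP survived the trip through the cluster IP. A small Go sketch of that comparison, using the response and pod IP from this run as sample inputs (the helper itself is illustrative, not the framework's code):

package main

import (
	"fmt"
	"net"
)

// sourceIPPreserved reports whether the "ip:port" string returned by the
// echo server's /clientip endpoint matches the calling pod's IP.
func sourceIPPreserved(clientip, podIP string) (bool, error) {
	host, _, err := net.SplitHostPort(clientip)
	if err != nil {
		return false, err
	}
	return host == podIP, nil
}

func main() {
	ok, err := sourceIPPreserved("10.244.3.59:43774", "10.244.3.59")
	if err != nil {
		panic(err)
	}
	fmt.Println("source IP preserved:", ok) // true in this run
}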
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:19.254 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903 ------------------------------ {"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:02.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for client IP based session affinity: udp [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434 STEP: Performing setup for networking test in namespace nettest-6124 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:53:02.515: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:53:02.545: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:04.549: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:06.551: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:08.549: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:10.549: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:12.550: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:14.549: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:16.550: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:18.549: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:20.549: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:22.549: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:24.548: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:53:24.552: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:53:30.589: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:53:30.589: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:53:30.596: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:30.598: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-6124" for this suite. S [SKIPPING] [28.208 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for client IP based session affinity: udp [LinuxOnly] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:02.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should be able to handle large requests: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461 STEP: Performing setup for networking test in namespace nettest-2592 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:53:02.749: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:53:02.779: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:04.783: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:06.784: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:08.783: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:10.784: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:12.784: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:14.784: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:16.784: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:18.783: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:20.782: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:22.784: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:24.784: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:53:24.790: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:53:30.810: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:53:30.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:53:30.816: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] 
Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:30.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-2592" for this suite. S [SKIPPING] [28.189 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should be able to handle large requests: udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:30.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should check NodePort out-of-range /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494 STEP: creating service nodeport-range-test with type NodePort in namespace services-1753 STEP: changing service nodeport-range-test to out-of-range NodePort 48681 STEP: deleting original service nodeport-range-test STEP: creating service nodeport-range-test with out-of-range NodePort 48681 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:30.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1753" for this suite. 
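The out-of-range check above is pure API-server validation: a Service that asks for a NodePort outside the configured --service-node-port-range (30000-32767 by default) must be rejected both when mutating an existing Service and when creating a fresh one, which is why the test creates, changes, deletes, and re-creates nodeport-range-test around port 48681. A hand-run sketch of the same rejection, assuming the default port range (the service name here is illustrative and not part of the test):

# Request a NodePort well outside the default 30000-32767 range;
# the API server should refuse to persist the Service.
kubectl create service nodeport nodeport-range-demo --tcp=80:80 --node-port=48681
# Expected to fail with validation output along the lines of:
#   spec.ports[0].nodePort: Invalid value: 48681: provided port is not in the
#   valid range. The range of valid ports is 30000-32767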
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":276,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:52:19.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W1113 03:52:19.950329 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:52:19.950: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:52:19.952: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should implement service.kubernetes.io/service-proxy-name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865 STEP: creating service-disabled in namespace services-1645 STEP: creating service service-proxy-disabled in namespace services-1645 STEP: creating replication controller service-proxy-disabled in namespace services-1645 I1113 03:52:19.963645 24 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-1645, replica count: 3 I1113 03:52:23.014703 24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:26.015175 24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:29.017675 24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:32.018646 24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:35.019917 24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:38.020077 24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating service in namespace services-1645 STEP: creating service service-proxy-toggled in namespace services-1645 STEP: creating replication controller service-proxy-toggled in namespace services-1645 I1113 03:52:38.034331 24 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-1645, replica count: 3 I1113 03:52:41.085598 24 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:44.086096 24 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:47.089054 24 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Nov 13 03:52:47.092: INFO: Creating new host exec pod Nov 13 03:52:47.106: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:49.109: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:51.112: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:52:51.112: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:52:57.132: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done" in pod services-1645/verify-service-up-host-exec-pod Nov 13 03:52:57.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done' Nov 13 03:52:57.944: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n" Nov 13 03:52:57.944: INFO: stdout: 
"service-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-to
ggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\n" Nov 13 03:52:57.944: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done" in pod services-1645/verify-service-up-exec-pod-l9dtt Nov 13 03:52:57.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-up-exec-pod-l9dtt -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done' Nov 13 03:52:58.691: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n" Nov 13 03:52:58.691: INFO: stdout: 
"service-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-to
ggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1645 STEP: Deleting pod verify-service-up-exec-pod-l9dtt in namespace services-1645 STEP: verifying service-disabled is not up Nov 13 03:52:58.704: INFO: Creating new host exec pod Nov 13 03:52:58.715: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:00.721: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:02.720: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:04.719: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:53:04.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.22.110:80 && echo service-down-failed' Nov 13 03:53:09.701: INFO: rc: 28 Nov 13 03:53:09.701: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.22.110:80 && echo service-down-failed" in pod services-1645/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.22.110:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.22.110:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1645 STEP: adding service-proxy-name label STEP: verifying service is not up Nov 13 03:53:09.722: INFO: Creating new host exec pod Nov 13 03:53:09.733: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:11.740: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:13.738: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:53:13.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.221:80 && echo service-down-failed' Nov 13 03:53:16.065: INFO: rc: 28 Nov 13 03:53:16.065: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.43.221:80 && echo service-down-failed" in pod services-1645/verify-service-down-host-exec-pod: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.221:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.43.221:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1645 STEP: removing service-proxy-name annotation STEP: verifying service is up Nov 13 03:53:16.080: INFO: Creating new host exec pod Nov 13 03:53:16.092: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:18.096: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:20.096: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:53:20.096: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:53:28.117: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done" in pod services-1645/verify-service-up-host-exec-pod Nov 13 03:53:28.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done' Nov 13 03:53:28.581: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n" Nov 13 03:53:28.581: INFO: stdout: 
"service-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-to
ggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\n" Nov 13 03:53:28.582: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done" in pod services-1645/verify-service-up-exec-pod-2mxwm Nov 13 03:53:28.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-up-exec-pod-2mxwm -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.43.221:80 2>&1 || true; echo; done' Nov 13 03:53:29.052: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.43.221:80\n+ echo\n" Nov 13 03:53:29.053: INFO: stdout: 
"service-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-to
ggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-g474z\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-pzkkf\nservice-proxy-toggled-kpk7r\nservice-proxy-toggled-g474z\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1645 STEP: Deleting pod verify-service-up-exec-pod-2mxwm in namespace services-1645 STEP: verifying service-disabled is still not up Nov 13 03:53:29.064: INFO: Creating new host exec pod Nov 13 03:53:29.079: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:31.083: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:53:31.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.22.110:80 && echo service-down-failed' Nov 13 03:53:33.558: INFO: rc: 28 Nov 13 03:53:33.558: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.22.110:80 && echo service-down-failed" in pod services-1645/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.22.110:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.22.110:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1645 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:33.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1645" for this suite. 
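The verify-service-up step above loops wget 150 times against the service ClusterIP from an exec pod and then checks that every backend pod name appears in the collected responses, while the follow-up verify-service-down step uses curl with --connect-timeout 2 (exit code 28 = timeout) to confirm the disabled service stays unreachable. A minimal Go sketch of the reachability half of that check is shown below; the ClusterIP 10.233.43.221:80, the three backend pod names, the 150-request count, and the 1-second timeout are taken from the log, and running it assumes the ClusterIP is reachable from wherever the sketch executes, which is not generally true outside the cluster.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	const url = "http://10.233.43.221:80" // ClusterIP:port from the log (assumed reachable from here)
	// Backend pods the service should load-balance across, per the log output.
	expected := map[string]bool{
		"service-proxy-toggled-g474z": false,
		"service-proxy-toggled-kpk7r": false,
		"service-proxy-toggled-pzkkf": false,
	}
	client := &http.Client{Timeout: 1 * time.Second} // mirrors wget's -T 1

	for i := 0; i < 150; i++ { // same request count as the e2e check
		resp, err := client.Get(url)
		if err != nil {
			continue // tolerate individual failures, like the "|| true" in the shell loop
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		name := strings.TrimSpace(string(body)) // the backend echoes its own pod name
		if _, ok := expected[name]; ok {
			expected[name] = true
		}
	}

	for name, seen := range expected {
		if !seen {
			fmt.Printf("FAIL: never reached backend %s\n", name)
			return
		}
	}
	fmt.Println("OK: all backends answered at least once")
}

As in the shell loop, individual request failures are tolerated; the check only fails if some backend never answers across the whole run.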
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:73.648 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should implement service.kubernetes.io/service-proxy-name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865 ------------------------------ {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":1,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:30.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should provide Internet connection for containers [Feature:Networking-IPv4] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97 STEP: Running container which tries to connect to 8.8.8.8 Nov 13 03:53:30.984: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-4964" to be "Succeeded or Failed" Nov 13 03:53:30.987: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480915ms Nov 13 03:53:32.990: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005660447s Nov 13 03:53:34.992: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008150204s Nov 13 03:53:36.996: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011347988s Nov 13 03:53:38.998: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014000433s STEP: Saw pod success Nov 13 03:53:38.998: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed" [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:38.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4964" for this suite. 
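The connectivity test above runs a short-lived "connectivity-test" pod that tries to reach 8.8.8.8 and waits for the pod to end up "Succeeded or Failed". A minimal sketch of that kind of probe, written as a standalone Go program, is shown below; the TCP port (53) and the 5-second timeout are illustrative assumptions, not values taken from the suite.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Try to open a TCP connection to a well-known public address.
	conn, err := net.DialTimeout("tcp", "8.8.8.8:53", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "no external connectivity:", err)
		os.Exit(1) // a non-zero exit would drive the pod phase to Failed
	}
	conn.Close()
	fmt.Println("external connectivity OK") // exit 0 drives the pod phase to Succeeded
}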
• [SLOW TEST:8.148 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide Internet connection for containers [Feature:Networking-IPv4] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97 ------------------------------ {"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:52:20.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename conntrack STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should drop INVALID conntrack entries /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282 Nov 13 03:52:20.481: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:22.484: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:24.484: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:26.486: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:28.483: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:30.485: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:32.488: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:34.484: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:36.489: INFO: The status of Pod boom-server is Running (Ready = true) STEP: Server pod created on node node2 STEP: Server service created Nov 13 03:52:36.510: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:38.513: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:40.513: INFO: The status of Pod startup-script is Running (Ready = true) STEP: Client pod created STEP: checking client pod does not RST the TCP connection because it receives and INVALID packet Nov 13 03:53:40.551: INFO: boom-server pod logs: 2021/11/13 03:52:32 external ip: 10.244.4.151 2021/11/13 03:52:32 listen on 0.0.0.0:9000 2021/11/13 03:52:32 probing 10.244.4.151 2021/11/13 03:52:41 tcp packet: &{SrcPort:44491 DestPort:9000 Seq:1382441776 Ack:0 Flags:40962 WindowSize:29200 Checksum:11273 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:41 tcp packet: &{SrcPort:44491 DestPort:9000 Seq:1382441777 Ack:357843950 Flags:32784 WindowSize:229 Checksum:27935 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:41 connection established 2021/11/13 03:52:41 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 173 203 21 82 
189 78 82 102 99 49 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:41 checksumer: &{sum:455604 oddByte:33 length:39} 2021/11/13 03:52:41 ret: 455637 2021/11/13 03:52:41 ret: 62427 2021/11/13 03:52:41 ret: 62427 2021/11/13 03:52:41 boom packet injected 2021/11/13 03:52:41 tcp packet: &{SrcPort:44491 DestPort:9000 Seq:1382441777 Ack:357843950 Flags:32785 WindowSize:229 Checksum:27934 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:43 tcp packet: &{SrcPort:34699 DestPort:9000 Seq:694813682 Ack:0 Flags:40962 WindowSize:29200 Checksum:53936 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:43 tcp packet: &{SrcPort:34699 DestPort:9000 Seq:694813683 Ack:650814331 Flags:32784 WindowSize:229 Checksum:39666 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:43 connection established 2021/11/13 03:52:43 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 135 139 38 201 28 219 41 106 3 243 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:43 checksumer: &{sum:556149 oddByte:33 length:39} 2021/11/13 03:52:43 ret: 556182 2021/11/13 03:52:43 ret: 31902 2021/11/13 03:52:43 ret: 31902 2021/11/13 03:52:43 boom packet injected 2021/11/13 03:52:43 tcp packet: &{SrcPort:34699 DestPort:9000 Seq:694813683 Ack:650814331 Flags:32785 WindowSize:229 Checksum:39665 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:45 tcp packet: &{SrcPort:40601 DestPort:9000 Seq:2455766882 Ack:0 Flags:40962 WindowSize:29200 Checksum:18284 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:45 tcp packet: &{SrcPort:40601 DestPort:9000 Seq:2455766883 Ack:3505660930 Flags:32784 WindowSize:229 Checksum:56620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:45 connection established 2021/11/13 03:52:45 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 158 153 208 242 157 98 146 96 7 99 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:45 checksumer: &{sum:500260 oddByte:33 length:39} 2021/11/13 03:52:45 ret: 500293 2021/11/13 03:52:45 ret: 41548 2021/11/13 03:52:45 ret: 41548 2021/11/13 03:52:45 boom packet injected 2021/11/13 03:52:45 tcp packet: &{SrcPort:40601 DestPort:9000 Seq:2455766883 Ack:3505660930 Flags:32785 WindowSize:229 Checksum:56619 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:47 tcp packet: &{SrcPort:45990 DestPort:9000 Seq:3500369526 Ack:0 Flags:40962 WindowSize:29200 Checksum:36149 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:47 tcp packet: &{SrcPort:45990 DestPort:9000 Seq:3500369527 Ack:1149223788 Flags:32784 WindowSize:229 Checksum:2093 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:47 connection established 2021/11/13 03:52:47 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 179 166 68 126 60 204 208 163 102 119 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:47 checksumer: &{sum:523241 oddByte:33 length:39} 2021/11/13 03:52:47 ret: 523274 2021/11/13 03:52:47 ret: 64529 2021/11/13 03:52:47 ret: 64529 2021/11/13 03:52:47 boom packet injected 2021/11/13 03:52:47 tcp packet: &{SrcPort:45990 DestPort:9000 Seq:3500369527 Ack:1149223788 Flags:32785 WindowSize:229 Checksum:2092 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:49 tcp packet: &{SrcPort:38447 DestPort:9000 Seq:344513163 Ack:0 Flags:40962 WindowSize:29200 Checksum:60129 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:49 tcp packet: 
&{SrcPort:38447 DestPort:9000 Seq:344513164 Ack:3034058511 Flags:32784 WindowSize:229 Checksum:41486 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:49 connection established 2021/11/13 03:52:49 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 150 47 180 214 136 111 20 136 218 140 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:49 checksumer: &{sum:490048 oddByte:33 length:39} 2021/11/13 03:52:49 ret: 490081 2021/11/13 03:52:49 ret: 31336 2021/11/13 03:52:49 ret: 31336 2021/11/13 03:52:49 boom packet injected 2021/11/13 03:52:49 tcp packet: &{SrcPort:38447 DestPort:9000 Seq:344513164 Ack:3034058511 Flags:32785 WindowSize:229 Checksum:41485 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:51 tcp packet: &{SrcPort:44491 DestPort:9000 Seq:1382441778 Ack:357843951 Flags:32784 WindowSize:229 Checksum:7931 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:51 tcp packet: &{SrcPort:37702 DestPort:9000 Seq:2403272375 Ack:0 Flags:40962 WindowSize:29200 Checksum:16151 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:51 tcp packet: &{SrcPort:37702 DestPort:9000 Seq:2403272376 Ack:1577549483 Flags:32784 WindowSize:229 Checksum:54695 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:51 connection established 2021/11/13 03:52:51 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 147 70 94 5 248 11 143 63 6 184 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:51 checksumer: &{sum:409342 oddByte:33 length:39} 2021/11/13 03:52:51 ret: 409375 2021/11/13 03:52:51 ret: 16165 2021/11/13 03:52:51 ret: 16165 2021/11/13 03:52:51 boom packet injected 2021/11/13 03:52:51 tcp packet: &{SrcPort:37702 DestPort:9000 Seq:2403272376 Ack:1577549483 Flags:32785 WindowSize:229 Checksum:54694 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:53 tcp packet: &{SrcPort:34699 DestPort:9000 Seq:694813684 Ack:650814332 Flags:32784 WindowSize:229 Checksum:19664 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:53 tcp packet: &{SrcPort:43744 DestPort:9000 Seq:343297519 Ack:0 Flags:40962 WindowSize:29200 Checksum:21309 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:53 tcp packet: &{SrcPort:43744 DestPort:9000 Seq:343297520 Ack:571887354 Flags:32784 WindowSize:229 Checksum:19871 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:53 connection established 2021/11/13 03:52:53 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 170 224 34 20 200 90 20 118 77 240 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:53 checksumer: &{sum:501109 oddByte:33 length:39} 2021/11/13 03:52:53 ret: 501142 2021/11/13 03:52:53 ret: 42397 2021/11/13 03:52:53 ret: 42397 2021/11/13 03:52:53 boom packet injected 2021/11/13 03:52:53 tcp packet: &{SrcPort:43744 DestPort:9000 Seq:343297520 Ack:571887354 Flags:32785 WindowSize:229 Checksum:19870 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:55 tcp packet: &{SrcPort:40601 DestPort:9000 Seq:2455766884 Ack:3505660931 Flags:32784 WindowSize:229 Checksum:36618 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:55 tcp packet: &{SrcPort:44461 DestPort:9000 Seq:560167675 Ack:0 Flags:40962 WindowSize:29200 Checksum:3751 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:55 tcp packet: &{SrcPort:44461 DestPort:9000 Seq:560167676 Ack:2092219893 Flags:32784 WindowSize:229 Checksum:14238 UrgentPtr:0}, flag: 
ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:55 connection established 2021/11/13 03:52:55 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 173 173 124 179 55 85 33 99 122 252 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:55 checksumer: &{sum:525691 oddByte:33 length:39} 2021/11/13 03:52:55 ret: 525724 2021/11/13 03:52:55 ret: 1444 2021/11/13 03:52:55 ret: 1444 2021/11/13 03:52:55 boom packet injected 2021/11/13 03:52:55 tcp packet: &{SrcPort:44461 DestPort:9000 Seq:560167676 Ack:2092219893 Flags:32785 WindowSize:229 Checksum:14237 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:57 tcp packet: &{SrcPort:45990 DestPort:9000 Seq:3500369528 Ack:1149223789 Flags:32784 WindowSize:229 Checksum:47626 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:57 tcp packet: &{SrcPort:32996 DestPort:9000 Seq:322723472 Ack:0 Flags:40962 WindowSize:29200 Checksum:24113 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:57 tcp packet: &{SrcPort:32996 DestPort:9000 Seq:322723473 Ack:2554420474 Flags:32784 WindowSize:229 Checksum:50374 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:57 connection established 2021/11/13 03:52:57 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 128 228 152 63 214 90 19 60 94 145 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:57 checksumer: &{sum:474079 oddByte:33 length:39} 2021/11/13 03:52:57 ret: 474112 2021/11/13 03:52:57 ret: 15367 2021/11/13 03:52:57 ret: 15367 2021/11/13 03:52:57 boom packet injected 2021/11/13 03:52:57 tcp packet: &{SrcPort:32996 DestPort:9000 Seq:322723473 Ack:2554420474 Flags:32785 WindowSize:229 Checksum:50373 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:59 tcp packet: &{SrcPort:38447 DestPort:9000 Seq:344513165 Ack:3034058512 Flags:32784 WindowSize:229 Checksum:21482 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:59 tcp packet: &{SrcPort:37781 DestPort:9000 Seq:3494734813 Ack:0 Flags:40962 WindowSize:29200 Checksum:31057 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:52:59 tcp packet: &{SrcPort:37781 DestPort:9000 Seq:3494734814 Ack:3749587293 Flags:32784 WindowSize:229 Checksum:50292 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:52:59 connection established 2021/11/13 03:52:59 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 147 149 223 124 162 189 208 77 107 222 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:52:59 checksumer: &{sum:519119 oddByte:33 length:39} 2021/11/13 03:52:59 ret: 519152 2021/11/13 03:52:59 ret: 60407 2021/11/13 03:52:59 ret: 60407 2021/11/13 03:52:59 boom packet injected 2021/11/13 03:52:59 tcp packet: &{SrcPort:37781 DestPort:9000 Seq:3494734814 Ack:3749587293 Flags:32785 WindowSize:229 Checksum:50291 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:01 tcp packet: &{SrcPort:37702 DestPort:9000 Seq:2403272377 Ack:1577549484 Flags:32784 WindowSize:229 Checksum:34693 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:01 tcp packet: &{SrcPort:42141 DestPort:9000 Seq:3981632028 Ack:0 Flags:40962 WindowSize:29200 Checksum:52531 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:01 tcp packet: &{SrcPort:42141 DestPort:9000 Seq:3981632029 Ack:3567735888 Flags:32784 WindowSize:229 Checksum:61547 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:01 connection established 2021/11/13 03:53:01 calling 
checksumTCP: 10.244.4.151 10.244.3.44 [35 40 164 157 212 165 205 176 237 82 226 29 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:01 checksumer: &{sum:480404 oddByte:33 length:39} 2021/11/13 03:53:01 ret: 480437 2021/11/13 03:53:01 ret: 21692 2021/11/13 03:53:01 ret: 21692 2021/11/13 03:53:01 boom packet injected 2021/11/13 03:53:01 tcp packet: &{SrcPort:42141 DestPort:9000 Seq:3981632029 Ack:3567735888 Flags:32785 WindowSize:229 Checksum:61546 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:03 tcp packet: &{SrcPort:43744 DestPort:9000 Seq:343297521 Ack:571887355 Flags:32784 WindowSize:229 Checksum:65402 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:03 tcp packet: &{SrcPort:40681 DestPort:9000 Seq:752639471 Ack:0 Flags:40962 WindowSize:29200 Checksum:4027 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:03 tcp packet: &{SrcPort:40681 DestPort:9000 Seq:752639472 Ack:3445416718 Flags:32784 WindowSize:229 Checksum:41902 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:03 connection established 2021/11/13 03:53:03 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 158 233 205 91 92 110 44 220 93 240 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:03 checksumer: &{sum:552912 oddByte:33 length:39} 2021/11/13 03:53:03 ret: 552945 2021/11/13 03:53:03 ret: 28665 2021/11/13 03:53:03 ret: 28665 2021/11/13 03:53:03 boom packet injected 2021/11/13 03:53:03 tcp packet: &{SrcPort:40681 DestPort:9000 Seq:752639472 Ack:3445416718 Flags:32785 WindowSize:229 Checksum:41901 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:05 tcp packet: &{SrcPort:44461 DestPort:9000 Seq:560167677 Ack:2092219894 Flags:32784 WindowSize:229 Checksum:59770 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:05 tcp packet: &{SrcPort:38761 DestPort:9000 Seq:2521964792 Ack:0 Flags:40962 WindowSize:29200 Checksum:58090 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:05 tcp packet: &{SrcPort:38761 DestPort:9000 Seq:2521964793 Ack:4235991482 Flags:32784 WindowSize:229 Checksum:2370 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:05 connection established 2021/11/13 03:53:05 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 151 105 252 122 147 26 150 82 32 249 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:05 checksumer: &{sum:473692 oddByte:33 length:39} 2021/11/13 03:53:05 ret: 473725 2021/11/13 03:53:05 ret: 14980 2021/11/13 03:53:05 ret: 14980 2021/11/13 03:53:05 boom packet injected 2021/11/13 03:53:05 tcp packet: &{SrcPort:38761 DestPort:9000 Seq:2521964793 Ack:4235991482 Flags:32785 WindowSize:229 Checksum:2369 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:07 tcp packet: &{SrcPort:32996 DestPort:9000 Seq:322723474 Ack:2554420475 Flags:32784 WindowSize:229 Checksum:30370 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:07 tcp packet: &{SrcPort:35091 DestPort:9000 Seq:2762856709 Ack:0 Flags:40962 WindowSize:29200 Checksum:8968 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:07 tcp packet: &{SrcPort:35091 DestPort:9000 Seq:2762856710 Ack:2382972834 Flags:32784 WindowSize:229 Checksum:33306 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:07 connection established 2021/11/13 03:53:07 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 137 19 142 7 193 2 164 173 217 6 80 24 0 229 0 0 0 0] [98 111 111 
109 33 33 33] 2021/11/13 03:53:07 checksumer: &{sum:377301 oddByte:33 length:39} 2021/11/13 03:53:07 ret: 377334 2021/11/13 03:53:07 ret: 49659 2021/11/13 03:53:07 ret: 49659 2021/11/13 03:53:07 boom packet injected 2021/11/13 03:53:07 tcp packet: &{SrcPort:35091 DestPort:9000 Seq:2762856710 Ack:2382972834 Flags:32785 WindowSize:229 Checksum:33305 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:09 tcp packet: &{SrcPort:37781 DestPort:9000 Seq:3494734815 Ack:3749587294 Flags:32784 WindowSize:229 Checksum:30290 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:09 tcp packet: &{SrcPort:40660 DestPort:9000 Seq:1485204856 Ack:0 Flags:40962 WindowSize:29200 Checksum:48426 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:09 tcp packet: &{SrcPort:40660 DestPort:9000 Seq:1485204857 Ack:4185624566 Flags:32784 WindowSize:229 Checksum:24741 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:09 connection established 2021/11/13 03:53:09 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 158 212 249 122 9 86 88 134 109 121 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:09 checksumer: &{sum:496869 oddByte:33 length:39} 2021/11/13 03:53:09 ret: 496902 2021/11/13 03:53:09 ret: 38157 2021/11/13 03:53:09 ret: 38157 2021/11/13 03:53:09 boom packet injected 2021/11/13 03:53:09 tcp packet: &{SrcPort:40660 DestPort:9000 Seq:1485204857 Ack:4185624566 Flags:32785 WindowSize:229 Checksum:24740 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:11 tcp packet: &{SrcPort:42141 DestPort:9000 Seq:3981632030 Ack:3567735889 Flags:32784 WindowSize:229 Checksum:41543 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:11 tcp packet: &{SrcPort:45195 DestPort:9000 Seq:1517662583 Ack:0 Flags:40962 WindowSize:29200 Checksum:23988 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:11 tcp packet: &{SrcPort:45195 DestPort:9000 Seq:1517662584 Ack:565061187 Flags:32784 WindowSize:229 Checksum:15071 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:11 connection established 2021/11/13 03:53:11 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 176 139 33 172 159 163 90 117 177 120 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:11 checksumer: &{sum:506107 oddByte:33 length:39} 2021/11/13 03:53:11 ret: 506140 2021/11/13 03:53:11 ret: 47395 2021/11/13 03:53:11 ret: 47395 2021/11/13 03:53:11 boom packet injected 2021/11/13 03:53:11 tcp packet: &{SrcPort:45195 DestPort:9000 Seq:1517662584 Ack:565061187 Flags:32785 WindowSize:229 Checksum:15070 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:13 tcp packet: &{SrcPort:40681 DestPort:9000 Seq:752639473 Ack:3445416719 Flags:32784 WindowSize:229 Checksum:21898 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:13 tcp packet: &{SrcPort:40159 DestPort:9000 Seq:3369864306 Ack:0 Flags:40962 WindowSize:29200 Checksum:41006 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:13 tcp packet: &{SrcPort:40159 DestPort:9000 Seq:3369864307 Ack:4247301668 Flags:32784 WindowSize:229 Checksum:4653 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:13 connection established 2021/11/13 03:53:13 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 156 223 253 39 39 132 200 220 12 115 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:13 checksumer: &{sum:510740 oddByte:33 length:39} 2021/11/13 03:53:13 ret: 
510773 2021/11/13 03:53:13 ret: 52028 2021/11/13 03:53:13 ret: 52028 2021/11/13 03:53:13 boom packet injected 2021/11/13 03:53:13 tcp packet: &{SrcPort:40159 DestPort:9000 Seq:3369864307 Ack:4247301668 Flags:32785 WindowSize:229 Checksum:4652 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:15 tcp packet: &{SrcPort:38761 DestPort:9000 Seq:2521964794 Ack:4235991483 Flags:32784 WindowSize:229 Checksum:47903 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:15 tcp packet: &{SrcPort:45407 DestPort:9000 Seq:1052686795 Ack:0 Flags:40962 WindowSize:29200 Checksum:24738 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:15 tcp packet: &{SrcPort:45407 DestPort:9000 Seq:1052686796 Ack:1999366728 Flags:32784 WindowSize:229 Checksum:5289 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:15 connection established 2021/11/13 03:53:15 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 177 95 119 42 99 168 62 190 185 204 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:15 checksumer: &{sum:503042 oddByte:33 length:39} 2021/11/13 03:53:15 ret: 503075 2021/11/13 03:53:15 ret: 44330 2021/11/13 03:53:15 ret: 44330 2021/11/13 03:53:15 boom packet injected 2021/11/13 03:53:15 tcp packet: &{SrcPort:45407 DestPort:9000 Seq:1052686796 Ack:1999366728 Flags:32785 WindowSize:229 Checksum:5288 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:17 tcp packet: &{SrcPort:35091 DestPort:9000 Seq:2762856711 Ack:2382972835 Flags:32784 WindowSize:229 Checksum:13302 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:17 tcp packet: &{SrcPort:39114 DestPort:9000 Seq:2606055658 Ack:0 Flags:40962 WindowSize:29200 Checksum:36273 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:17 tcp packet: &{SrcPort:39114 DestPort:9000 Seq:2606055659 Ack:2363627898 Flags:32784 WindowSize:229 Checksum:62719 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:17 connection established 2021/11/13 03:53:17 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 152 202 140 224 146 218 155 85 64 235 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:17 checksumer: &{sum:570897 oddByte:33 length:39} 2021/11/13 03:53:17 ret: 570930 2021/11/13 03:53:17 ret: 46650 2021/11/13 03:53:17 ret: 46650 2021/11/13 03:53:17 boom packet injected 2021/11/13 03:53:17 tcp packet: &{SrcPort:39114 DestPort:9000 Seq:2606055659 Ack:2363627898 Flags:32785 WindowSize:229 Checksum:62718 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:19 tcp packet: &{SrcPort:40660 DestPort:9000 Seq:1485204858 Ack:4185624567 Flags:32784 WindowSize:229 Checksum:4737 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:19 tcp packet: &{SrcPort:34181 DestPort:9000 Seq:2627198285 Ack:0 Flags:40962 WindowSize:29200 Checksum:64378 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:19 tcp packet: &{SrcPort:34181 DestPort:9000 Seq:2627198286 Ack:1096793523 Flags:32784 WindowSize:229 Checksum:571 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:19 connection established 2021/11/13 03:53:19 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 133 133 65 94 55 19 156 151 221 78 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:19 checksumer: &{sum:445686 oddByte:33 length:39} 2021/11/13 03:53:19 ret: 445719 2021/11/13 03:53:19 ret: 52509 2021/11/13 03:53:19 ret: 52509 2021/11/13 03:53:19 boom packet injected 
2021/11/13 03:53:19 tcp packet: &{SrcPort:34181 DestPort:9000 Seq:2627198286 Ack:1096793523 Flags:32785 WindowSize:229 Checksum:570 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:21 tcp packet: &{SrcPort:45195 DestPort:9000 Seq:1517662585 Ack:565061188 Flags:32784 WindowSize:229 Checksum:60604 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:21 tcp packet: &{SrcPort:44718 DestPort:9000 Seq:4023915744 Ack:0 Flags:40962 WindowSize:29200 Checksum:16304 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:21 tcp packet: &{SrcPort:44718 DestPort:9000 Seq:4023915745 Ack:386604304 Flags:32784 WindowSize:229 Checksum:2458 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:21 connection established 2021/11/13 03:53:21 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 174 174 23 9 150 112 239 216 20 225 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:21 checksumer: &{sum:512478 oddByte:33 length:39} 2021/11/13 03:53:21 ret: 512511 2021/11/13 03:53:21 ret: 53766 2021/11/13 03:53:21 ret: 53766 2021/11/13 03:53:21 boom packet injected 2021/11/13 03:53:21 tcp packet: &{SrcPort:44718 DestPort:9000 Seq:4023915745 Ack:386604304 Flags:32785 WindowSize:229 Checksum:2457 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:23 tcp packet: &{SrcPort:40159 DestPort:9000 Seq:3369864308 Ack:4247301669 Flags:32784 WindowSize:229 Checksum:50186 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:23 tcp packet: &{SrcPort:44632 DestPort:9000 Seq:972520981 Ack:0 Flags:40962 WindowSize:29200 Checksum:34019 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:23 tcp packet: &{SrcPort:44632 DestPort:9000 Seq:972520982 Ack:1138142909 Flags:32784 WindowSize:229 Checksum:34937 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:23 connection established 2021/11/13 03:53:23 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 174 88 67 213 40 29 57 247 126 22 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:23 checksumer: &{sum:477264 oddByte:33 length:39} 2021/11/13 03:53:23 ret: 477297 2021/11/13 03:53:23 ret: 18552 2021/11/13 03:53:23 ret: 18552 2021/11/13 03:53:23 boom packet injected 2021/11/13 03:53:23 tcp packet: &{SrcPort:44632 DestPort:9000 Seq:972520982 Ack:1138142909 Flags:32785 WindowSize:229 Checksum:34936 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:25 tcp packet: &{SrcPort:45407 DestPort:9000 Seq:1052686797 Ack:1999366729 Flags:32784 WindowSize:229 Checksum:50822 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:25 tcp packet: &{SrcPort:41647 DestPort:9000 Seq:1203839292 Ack:0 Flags:40962 WindowSize:29200 Checksum:55241 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:25 tcp packet: &{SrcPort:41647 DestPort:9000 Seq:1203839293 Ack:1233674908 Flags:32784 WindowSize:229 Checksum:6665 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:25 connection established 2021/11/13 03:53:25 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 162 175 73 134 219 252 71 193 33 61 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:25 checksumer: &{sum:532654 oddByte:33 length:39} 2021/11/13 03:53:25 ret: 532687 2021/11/13 03:53:25 ret: 8407 2021/11/13 03:53:25 ret: 8407 2021/11/13 03:53:25 boom packet injected 2021/11/13 03:53:25 tcp packet: &{SrcPort:41647 DestPort:9000 Seq:1203839293 Ack:1233674908 Flags:32785 WindowSize:229 
Checksum:6664 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:27 tcp packet: &{SrcPort:39114 DestPort:9000 Seq:2606055660 Ack:2363627899 Flags:32784 WindowSize:229 Checksum:42715 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:27 tcp packet: &{SrcPort:43552 DestPort:9000 Seq:830939779 Ack:0 Flags:40962 WindowSize:29200 Checksum:56698 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:27 tcp packet: &{SrcPort:43552 DestPort:9000 Seq:830939780 Ack:3108245467 Flags:32784 WindowSize:229 Checksum:64237 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:27 connection established 2021/11/13 03:53:27 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 170 32 185 66 137 59 49 135 34 132 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:27 checksumer: &{sum:432575 oddByte:33 length:39} 2021/11/13 03:53:27 ret: 432608 2021/11/13 03:53:27 ret: 39398 2021/11/13 03:53:27 ret: 39398 2021/11/13 03:53:27 boom packet injected 2021/11/13 03:53:27 tcp packet: &{SrcPort:43552 DestPort:9000 Seq:830939780 Ack:3108245467 Flags:32785 WindowSize:229 Checksum:64236 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:29 tcp packet: &{SrcPort:42380 DestPort:9000 Seq:718927567 Ack:0 Flags:40962 WindowSize:29200 Checksum:3234 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:29 tcp packet: &{SrcPort:42380 DestPort:9000 Seq:718927568 Ack:4277981702 Flags:32784 WindowSize:229 Checksum:6757 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:29 connection established 2021/11/13 03:53:29 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 165 140 254 251 75 102 42 217 246 208 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:29 checksumer: &{sum:559246 oddByte:33 length:39} 2021/11/13 03:53:29 ret: 559279 2021/11/13 03:53:29 ret: 34999 2021/11/13 03:53:29 ret: 34999 2021/11/13 03:53:29 boom packet injected 2021/11/13 03:53:29 tcp packet: &{SrcPort:42380 DestPort:9000 Seq:718927568 Ack:4277981702 Flags:32785 WindowSize:229 Checksum:6756 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:29 tcp packet: &{SrcPort:34181 DestPort:9000 Seq:2627198287 Ack:1096793524 Flags:32784 WindowSize:229 Checksum:46101 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:31 tcp packet: &{SrcPort:38514 DestPort:9000 Seq:17861739 Ack:0 Flags:40962 WindowSize:29200 Checksum:43032 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:31 tcp packet: &{SrcPort:38514 DestPort:9000 Seq:17861740 Ack:3843428367 Flags:32784 WindowSize:229 Checksum:35302 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:31 connection established 2021/11/13 03:53:31 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 150 114 229 20 137 111 1 16 140 108 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:31 checksumer: &{sum:418577 oddByte:33 length:39} 2021/11/13 03:53:31 ret: 418610 2021/11/13 03:53:31 ret: 25400 2021/11/13 03:53:31 ret: 25400 2021/11/13 03:53:31 boom packet injected 2021/11/13 03:53:31 tcp packet: &{SrcPort:38514 DestPort:9000 Seq:17861740 Ack:3843428367 Flags:32785 WindowSize:229 Checksum:35301 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:31 tcp packet: &{SrcPort:44718 DestPort:9000 Seq:4023915746 Ack:386604305 Flags:32784 WindowSize:229 Checksum:47989 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:33 tcp packet: &{SrcPort:38044 
DestPort:9000 Seq:1211527143 Ack:0 Flags:40962 WindowSize:29200 Checksum:30587 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:33 tcp packet: &{SrcPort:38044 DestPort:9000 Seq:1211527144 Ack:1334024098 Flags:32784 WindowSize:229 Checksum:24442 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:33 connection established 2021/11/13 03:53:33 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 148 156 79 130 17 2 72 54 111 232 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:33 checksumer: &{sum:470827 oddByte:33 length:39} 2021/11/13 03:53:33 ret: 470860 2021/11/13 03:53:33 ret: 12115 2021/11/13 03:53:33 ret: 12115 2021/11/13 03:53:33 boom packet injected 2021/11/13 03:53:33 tcp packet: &{SrcPort:38044 DestPort:9000 Seq:1211527144 Ack:1334024098 Flags:32785 WindowSize:229 Checksum:24441 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:33 tcp packet: &{SrcPort:44632 DestPort:9000 Seq:972520983 Ack:1138142910 Flags:32784 WindowSize:229 Checksum:14921 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:35 tcp packet: &{SrcPort:35804 DestPort:9000 Seq:3025518969 Ack:0 Flags:40962 WindowSize:29200 Checksum:47801 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:35 tcp packet: &{SrcPort:35804 DestPort:9000 Seq:3025518970 Ack:4092432613 Flags:32784 WindowSize:229 Checksum:64824 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:35 connection established 2021/11/13 03:53:35 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 139 220 243 236 10 69 180 85 193 122 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:35 checksumer: &{sum:511613 oddByte:33 length:39} 2021/11/13 03:53:35 ret: 511646 2021/11/13 03:53:35 ret: 52901 2021/11/13 03:53:35 ret: 52901 2021/11/13 03:53:35 boom packet injected 2021/11/13 03:53:35 tcp packet: &{SrcPort:35804 DestPort:9000 Seq:3025518970 Ack:4092432613 Flags:32785 WindowSize:229 Checksum:64823 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:35 tcp packet: &{SrcPort:41647 DestPort:9000 Seq:1203839294 Ack:1233674909 Flags:32784 WindowSize:229 Checksum:52196 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:37 tcp packet: &{SrcPort:32832 DestPort:9000 Seq:160742216 Ack:0 Flags:40962 WindowSize:29200 Checksum:28535 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:37 tcp packet: &{SrcPort:32832 DestPort:9000 Seq:160742217 Ack:1894845749 Flags:32784 WindowSize:229 Checksum:45268 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:37 connection established 2021/11/13 03:53:37 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 128 64 112 239 134 149 9 148 187 73 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:37 checksumer: &{sum:496314 oddByte:33 length:39} 2021/11/13 03:53:37 ret: 496347 2021/11/13 03:53:37 ret: 37602 2021/11/13 03:53:37 ret: 37602 2021/11/13 03:53:37 boom packet injected 2021/11/13 03:53:37 tcp packet: &{SrcPort:32832 DestPort:9000 Seq:160742217 Ack:1894845749 Flags:32785 WindowSize:229 Checksum:45267 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:37 tcp packet: &{SrcPort:43552 DestPort:9000 Seq:830939781 Ack:3108245468 Flags:32784 WindowSize:229 Checksum:44231 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:39 tcp packet: &{SrcPort:42380 DestPort:9000 Seq:718927569 Ack:4277981703 Flags:32784 WindowSize:229 Checksum:52290 UrgentPtr:0}, flag: ACK , 
data: [], addr: 10.244.3.44 2021/11/13 03:53:39 tcp packet: &{SrcPort:39143 DestPort:9000 Seq:2340047566 Ack:0 Flags:40962 WindowSize:29200 Checksum:15764 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.44 2021/11/13 03:53:39 tcp packet: &{SrcPort:39143 DestPort:9000 Seq:2340047567 Ack:2928448685 Flags:32784 WindowSize:229 Checksum:45581 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.44 2021/11/13 03:53:39 connection established 2021/11/13 03:53:39 calling checksumTCP: 10.244.4.151 10.244.3.44 [35 40 152 231 174 139 14 13 139 122 74 207 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33] 2021/11/13 03:53:39 checksumer: &{sum:506281 oddByte:33 length:39} 2021/11/13 03:53:39 ret: 506314 2021/11/13 03:53:39 ret: 47569 2021/11/13 03:53:39 ret: 47569 2021/11/13 03:53:39 boom packet injected 2021/11/13 03:53:39 tcp packet: &{SrcPort:39143 DestPort:9000 Seq:2340047567 Ack:2928448685 Flags:32785 WindowSize:229 Checksum:45580 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.44 Nov 13 03:53:40.551: INFO: boom-server OK: did not receive any RST packet [AfterEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:40.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "conntrack-5413" for this suite. • [SLOW TEST:80.117 seconds] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should drop INVALID conntrack entries /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282 ------------------------------ {"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:04.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename network-perf STEP: Waiting for a default service account to be provisioned in namespace [It] should run iperf2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188 Nov 13 03:53:04.822: INFO: deploying iperf2 server Nov 13 03:53:04.825: INFO: Waiting for deployment "iperf2-server-deployment" to complete Nov 13 03:53:04.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Nov 13 03:53:06.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 03:53:08.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 03:53:10.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372384, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 03:53:12.843: INFO: waiting for iperf2 server endpoints Nov 13 03:53:14.847: INFO: found iperf2 server endpoints Nov 13 03:53:14.847: INFO: waiting for client pods to be running Nov 13 03:53:16.851: INFO: all client pods are ready: 2 pods Nov 13 03:53:16.854: INFO: server pod phase Running Nov 13 03:53:16.854: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:53:04 +0000 UTC Reason: Message:} Nov 13 03:53:16.854: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:53:10 +0000 UTC Reason: Message:} Nov 13 03:53:16.854: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:53:10 +0000 UTC Reason: Message:} Nov 13 03:53:16.854: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:53:04 +0000 UTC Reason: Message:} Nov 13 03:53:16.854: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-11-13 03:53:09 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true 
RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://109f84f1762cab5de80cad7992baf9d23de7f6d23490685255430f0121269955 Started:0xc0004122bc} Nov 13 03:53:16.854: INFO: found 2 matching client pods Nov 13 03:53:16.857: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-9924 PodName:iperf2-clients-djfdk ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:53:16.857: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:53:16.963: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads" Nov 13 03:53:16.963: INFO: iperf version: Nov 13 03:53:16.963: INFO: attempting to run command 'iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-djfdk (node node1) Nov 13 03:53:16.966: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-9924 PodName:iperf2-clients-djfdk ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:53:16.966: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:53:32.313: INFO: Exec stderr: "" Nov 13 03:53:32.313: INFO: output from exec on client pod iperf2-clients-djfdk (node node1): 20211113035318.317,10.244.3.58,42218,10.233.25.240,6789,3,0.0-1.0,49283072,394264576 20211113035319.285,10.244.3.58,42218,10.233.25.240,6789,3,1.0-2.0,104202240,833617920 20211113035320.337,10.244.3.58,42218,10.233.25.240,6789,3,2.0-3.0,60424192,483393536 20211113035321.287,10.244.3.58,42218,10.233.25.240,6789,3,3.0-4.0,46006272,368050176 20211113035322.274,10.244.3.58,42218,10.233.25.240,6789,3,4.0-5.0,94896128,759169024 20211113035323.285,10.244.3.58,42218,10.233.25.240,6789,3,5.0-6.0,117833728,942669824 20211113035324.275,10.244.3.58,42218,10.233.25.240,6789,3,6.0-7.0,63963136,511705088 20211113035325.263,10.244.3.58,42218,10.233.25.240,6789,3,7.0-8.0,73138176,585105408 20211113035326.269,10.244.3.58,42218,10.233.25.240,6789,3,8.0-9.0,117047296,936378368 20211113035327.277,10.244.3.58,42218,10.233.25.240,6789,3,9.0-10.0,105512960,844103680 20211113035327.277,10.244.3.58,42218,10.233.25.240,6789,3,0.0-10.0,832307200,665708158 Nov 13 03:53:32.315: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-9924 PodName:iperf2-clients-xx9sp ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:53:32.315: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:53:32.421: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads" Nov 13 03:53:32.421: INFO: iperf version: Nov 13 03:53:32.421: INFO: attempting to run command 'iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-xx9sp (node node2) Nov 13 03:53:32.424: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-9924 PodName:iperf2-clients-xx9sp ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:53:32.424: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:53:47.613: INFO: Exec stderr: "" Nov 13 03:53:47.613: INFO: output from exec on client pod iperf2-clients-xx9sp (node node2): 
20211113035333.547,10.244.4.176,33882,10.233.25.240,6789,3,0.0-1.0,2038038528,16304308224 20211113035334.565,10.244.4.176,33882,10.233.25.240,6789,3,1.0-2.0,1998323712,15986589696 20211113035335.553,10.244.4.176,33882,10.233.25.240,6789,3,2.0-3.0,2001207296,16009658368 20211113035336.555,10.244.4.176,33882,10.233.25.240,6789,3,3.0-4.0,1990590464,15924723712 20211113035337.560,10.244.4.176,33882,10.233.25.240,6789,3,4.0-5.0,1575878656,12607029248 20211113035338.558,10.244.4.176,33882,10.233.25.240,6789,3,5.0-6.0,1933836288,15470690304 20211113035339.557,10.244.4.176,33882,10.233.25.240,6789,3,6.0-7.0,1833435136,14667481088 20211113035340.558,10.244.4.176,33882,10.233.25.240,6789,3,7.0-8.0,1089077248,8712617984 20211113035341.564,10.244.4.176,33882,10.233.25.240,6789,3,8.0-9.0,1801977856,14415822848 20211113035342.563,10.244.4.176,33882,10.233.25.240,6789,3,9.0-10.0,1972371456,15778971648 20211113035342.563,10.244.4.176,33882,10.233.25.240,6789,3,0.0-10.0,18234736640,14587752842 Nov 13 03:53:47.613: INFO: From To Bandwidth (MB/s) Nov 13 03:53:47.613: INFO: node1 node2 79 Nov 13 03:53:47.613: INFO: node2 node2 1739 [AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:47.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "network-perf-9924" for this suite. • [SLOW TEST:42.829 seconds] [sig-network] Networking IPerf2 [Feature:Networking-Performance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should run iperf2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188 ------------------------------ {"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:16.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should check kube-proxy urls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 STEP: Performing setup for networking test in namespace nettest-8539 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:53:16.934: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:53:16.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:18.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:20.976: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:22.975: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:24.975: INFO: The 
status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:26.976: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:28.975: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:30.975: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:32.976: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:34.975: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:36.976: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:53:38.975: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:53:38.979: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:53:49.021: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:53:49.021: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:53:49.029: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:49.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8539" for this suite. S [SKIPPING] [32.215 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should check kube-proxy urls [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:53:49.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename firewall-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61 Nov 13 03:53:49.092: INFO: Only supported for providers [gce] (not local) [AfterEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:49.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "firewall-test-4898" for this suite. 
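The per-node figures that the IPerf2 spec above reports (node1 -> node2: 79 MB/s, node2 -> node2: 1739 MB/s) can be reproduced from the final 0.0-10.0 record of each client's CSV output. A minimal sketch, assuming iperf 2's --reportstyle C column order of date, source address, source port, destination address, destination port, transfer id, interval, transferred bytes, bits per second:

  # Recompute MB/s from the cumulative 0.0-10.0 line of the node1 client (10-second run).
  echo '20211113035327.277,10.244.3.58,42218,10.233.25.240,6789,3,0.0-10.0,832307200,665708158' \
    | awk -F, '{ printf "%.0f\n", $8 / 10 / (1024 * 1024) }'    # prints 79

The same arithmetic on the node2 client's cumulative record (18234736640 bytes) yields 1739, matching the "From To Bandwidth (MB/s)" summary printed by the suite.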
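In the Services spec that follows, "verifying service has 3 reachable backends" appears to work by fetching the service ClusterIP 150 times from a host-exec pod and confirming that every expected backend pod name shows up in the combined responses, while the "verifying service is not up" steps treat a curl connect timeout (exit code 28) as the expected outcome. A hedged sketch of the same checks from a shell, reusing the ClusterIP 10.233.58.140 seen in this run:

  # Count distinct backends answering on the ClusterIP; 3 is expected once all replicas serve.
  for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done \
    | sort -u | grep -c '^service-headless-toggled-'

  # While the service is expected to be down, a timed-out curl (exit code 28) is the passing case.
  curl -g -s --connect-timeout 2 http://10.233.58.140:80 && echo service-down-failed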
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 control plane should not expose well-known ports [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214 Only supported for providers [gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:52:19.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W1113 03:52:20.010912 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:52:20.011: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:52:20.012: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 STEP: creating service-headless in namespace services-5480 STEP: creating service service-headless in namespace services-5480 STEP: creating replication controller service-headless in namespace services-5480 I1113 03:52:20.023296 31 runners.go:190] Created replication controller with name: service-headless, namespace: services-5480, replica count: 3 I1113 03:52:23.074935 31 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:26.076464 31 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:29.078350 31 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:32.078607 31 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating service in namespace services-5480 STEP: creating service service-headless-toggled in namespace services-5480 STEP: creating replication controller service-headless-toggled in namespace services-5480 I1113 03:52:32.091477 31 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-5480, replica count: 3 I1113 03:52:35.142220 31 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:38.142742 31 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I1113 03:52:41.144238 31 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:52:44.145068 31 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Nov 13 03:52:44.147: INFO: Creating new host exec pod Nov 13 03:52:44.161: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:46.165: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:52:48.164: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:52:48.164: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:52:56.182: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done" in pod services-5480/verify-service-up-host-exec-pod Nov 13 03:52:56.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done' Nov 13 03:52:57.595: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 
1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n" Nov 13 03:52:57.596: INFO: stdout: 
"service-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nse
rvice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\n" Nov 13 03:52:57.596: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done" in pod services-5480/verify-service-up-exec-pod-qjbhz Nov 13 03:52:57.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-up-exec-pod-qjbhz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done' Nov 13 03:52:58.462: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n" Nov 13 03:52:58.463: INFO: stdout: 
"service-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nse
rvice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-5480 STEP: Deleting pod verify-service-up-exec-pod-qjbhz in namespace services-5480 STEP: verifying service-headless is not up Nov 13 03:52:58.477: INFO: Creating new host exec pod Nov 13 03:52:58.490: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:00.493: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:02.494: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:53:02.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.52.39:80 && echo service-down-failed' Nov 13 03:53:05.742: INFO: rc: 28 Nov 13 03:53:05.742: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.52.39:80 && echo service-down-failed" in pod services-5480/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.52.39:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.52.39:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5480 STEP: adding service.kubernetes.io/headless label STEP: verifying service is not up Nov 13 03:53:05.756: INFO: Creating new host exec pod Nov 13 03:53:05.770: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:07.777: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:09.773: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:11.775: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:13.775: 
INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:15.774: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:17.776: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:19.773: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:21.776: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:23.773: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:53:23.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.58.140:80 && echo service-down-failed' Nov 13 03:53:26.839: INFO: rc: 28 Nov 13 03:53:26.839: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.58.140:80 && echo service-down-failed" in pod services-5480/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.58.140:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.58.140:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5480 STEP: removing service.kubernetes.io/headless annotation STEP: verifying service is up Nov 13 03:53:26.855: INFO: Creating new host exec pod Nov 13 03:53:26.871: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:28.875: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:30.874: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:53:30.874: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:53:36.893: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done" in pod services-5480/verify-service-up-host-exec-pod Nov 13 03:53:36.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done' Nov 13 03:53:37.914: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 
1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 
-O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n" Nov 13 03:53:37.914: INFO: stdout: "service-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headl
ess-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\n" Nov 13 03:53:37.915: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done" in pod services-5480/verify-service-up-exec-pod-52vft Nov 13 03:53:37.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-up-exec-pod-52vft -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.58.140:80 2>&1 || true; echo; done' Nov 13 03:53:39.053: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.58.140:80\n+ echo\n" Nov 13 03:53:39.054: INFO: stdout: 
"service-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nse
rvice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-2ns2m\nservice-headless-toggled-2ns2m\nservice-headless-toggled-rx57j\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\nservice-headless-toggled-2ns2m\nservice-headless-toggled-9wwdh\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-5480 STEP: Deleting pod verify-service-up-exec-pod-52vft in namespace services-5480 STEP: verifying service-headless is still not up Nov 13 03:53:39.066: INFO: Creating new host exec pod Nov 13 03:53:39.077: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:41.080: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:43.080: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:45.081: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:53:47.085: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:53:47.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.52.39:80 && echo service-down-failed' Nov 13 03:53:49.340: INFO: rc: 28 Nov 13 03:53:49.340: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.52.39:80 && echo service-down-failed" in pod services-5480/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5480 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.52.39:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.52.39:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5480 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:53:49.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5480" for this suite. 
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:89.364 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":90,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:49.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Nov 13 03:53:49.919: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/:
anaconda/
audit/
boot.log
[node1 kubelet /logs/ listing repeated 20 times in the captured output; the rest of this spec's output was truncated in the log capture]
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-515
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-515
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:50.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-515" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":216,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:23.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-9086
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:53:23.270: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:23.310: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:25.314: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:27.314: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:29.314: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:31.315: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:33.314: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:35.314: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:37.315: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:39.313: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:41.314: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:43.314: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:45.313: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:47.314: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:49.314: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:53:49.319: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:53:57.340: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:53:57.340: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:57.346: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:57.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9086" for this suite.


S [SKIPPING] [34.198 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
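This skip, like the identical ones later in the run, is the framework declining to run cross-node service checks when it cannot find at least two usable nodes; with the local provider it cannot establish a usable node count and reports -1. A rough manual equivalent of the preconditions it polls for, using plain kubectl (the pod label and timeout below are illustrative):

# The suite needs at least two nodes that are Ready and not cordoned.
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\tunschedulable="}{.spec.unschedulable}{"\n"}{end}'

# The netserver readiness polling seen above amounts to waiting on the Ready condition:
kubectl wait pod -l app=netserver --for=condition=Ready --timeout=5m
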
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:57.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:53:57.873: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:53:57.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5952" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:30.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-2004
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:53:31.003: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:31.034: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:33.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:35.038: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:37.043: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:39.037: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:41.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:43.038: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:45.038: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:47.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:49.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:51.040: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:53.037: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:55.037: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:53:55.041: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:54:05.079: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:54:05.079: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:05.086: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:05.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2004" for this suite.


S [SKIPPING] [34.204 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:40.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-8625
STEP: creating replication controller externalip-test in namespace services-8625
I1113 03:53:40.674704      32 runners.go:190] Created replication controller with name: externalip-test, namespace: services-8625, replica count: 2
I1113 03:53:43.728280      32 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:53:46.729888      32 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:53:49.730068      32 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:53:52.732158      32 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:53:55.732894      32 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 13 03:53:55.732: INFO: Creating new exec pod
Nov 13 03:54:04.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8625 exec execpodbtl2x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov 13 03:54:05.011: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov 13 03:54:05.011: INFO: stdout: ""
Nov 13 03:54:06.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8625 exec execpodbtl2x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov 13 03:54:06.320: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov 13 03:54:06.320: INFO: stdout: "externalip-test-bpf4c"
Nov 13 03:54:06.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8625 exec execpodbtl2x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.32.6 80'
Nov 13 03:54:07.050: INFO: stderr: "+ nc -v -t -w 2 10.233.32.6 80\nConnection to 10.233.32.6 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Nov 13 03:54:07.050: INFO: stdout: "externalip-test-bpf4c"
Nov 13 03:54:07.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8625 exec execpodbtl2x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Nov 13 03:54:07.297: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Nov 13 03:54:07.297: INFO: stdout: "externalip-test-bpf4c"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:07.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8625" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:26.667 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":2,"skipped":336,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:07.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:54:07.336: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:07.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-3906" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
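The ESIPP specs are skipped here because they need a cloud provider with a LoadBalancer implementation (gce/gke), which provider=local does not offer. The field they exercise can still be set on any NodePort Service; a sketch under that assumption, with an illustrative service name:

kubectl create service nodeport esipp-demo --tcp=80:80
# Route external traffic only to node-local endpoints so the client source IP is preserved:
kubectl patch service esipp-demo -p '{"spec":{"externalTrafficPolicy":"Local"}}'
kubectl get service esipp-demo -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
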
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:39.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-4112
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:53:39.236: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:39.268: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:41.273: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:43.272: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:45.271: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:47.272: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:49.271: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:51.273: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:53.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:55.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:57.273: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:59.273: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:01.272: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:54:01.279: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:03.283: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:05.282: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:07.284: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:54:17.320: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:54:17.320: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:17.326: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:17.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4112" for this suite.


S [SKIPPING] [38.207 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:17.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:54:17.380: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:17.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7518" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:50.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-1866
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:53:50.462: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:50.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:52.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:54.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:56.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:58.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:00.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:02.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:04.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:06.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:08.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:10.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:12.499: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:54:12.504: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:14.508: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:54:22.543: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:54:22.543: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:22.550: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:22.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1866" for this suite.


S [SKIPPING] [32.238 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:07.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Nov 13 03:54:07.466: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-febfa0f4-5494-44e6-9c4f-8468ca644516  dns-2095  3b56a9de-3243-46ee-9bb3-479a73cbb304 149298 0 2021-11-13 03:54:07 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-11-13 03:54:07 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-lj452,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-qp4dn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qp4dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 13 03:54:19.477: INFO: testServerIP is 10.244.4.192
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Nov 13 03:54:19.489: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-2095  8e6e3633-30cf-4a57-8081-19b2d45f0558 149554 0 2021-11-13 03:54:19 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-11-13 03:54:19 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q5r64,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.4.192],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Nov 13 03:54:23.495: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-2095 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:54:23.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Nov 13 03:54:23.757: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-2095 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:54:23.757: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:54:23.860: INFO: Deleting pod e2e-dns-utils...
Nov 13 03:54:23.867: INFO: Deleting pod e2e-configmap-dns-server-febfa0f4-5494-44e6-9c4f-8468ca644516...
Nov 13 03:54:23.873: INFO: Deleting configmap e2e-coredns-configmap-lj452...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:23.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2095" for this suite.


• [SLOW TEST:16.453 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":3,"skipped":385,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:23.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Nov 13 03:54:23.966: INFO: (0) /api/v1/nodes/node2/proxy/logs/: 
anaconda/
audit/
boot.log
[node2 kubelet /logs/ listing repeated 20 times in the captured output; the rest of this spec's output was truncated in the log capture]
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:25.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5641" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":5,"skipped":1040,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:57.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-7668
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:53:58.040: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:53:58.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:00.075: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:02.075: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:04.076: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:06.077: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:08.075: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:10.076: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:12.079: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:14.078: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:16.077: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:18.076: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:20.076: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:54:20.082: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:22.086: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:54:30.109: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:54:30.109: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:30.116: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:30.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7668" for this suite.


S [SKIPPING] [32.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
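    The skip comes from the framework's two-node requirement; with the local provider the configured node count appears to be -1 rather than the real number. A quick way to see how many nodes the cluster actually has, independent of that setting, is:

      # Count nodes directly instead of trusting the provider config
      kubectl --kubeconfig=/root/.kube/config get nodes --no-headers | wc -l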
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:25.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-8664
STEP: deleting original service nodeport-reuse
Nov 13 03:54:25.996: INFO: Creating new host exec pod
Nov 13 03:54:26.012: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:28.015: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:30.016: INFO: The status of Pod hostexec is Running (Ready = true)
Nov 13 03:54:30.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8664 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30428' | tail -n +2 | grep LISTEN'
Nov 13 03:54:30.262: INFO: stderr: "+ ss -ant46 'sport = :30428'\n+ tail -n +2\n+ grep LISTEN\n"
Nov 13 03:54:30.262: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 30428
STEP: deleting service nodeport-reuse in namespace services-8664
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:30.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8664" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
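The spec above deletes a NodePort Service and then checks, from a host-network pod, that nothing is still listening on the freed port before recreating the Service with the same NodePort. The check it ran (port number taken from this run) was essentially:

  # Verify no listener remains on the released NodePort 30428
  kubectl --kubeconfig=/root/.kube/config -n services-8664 exec hostexec -- \
    /bin/sh -x -c "! ss -ant46 'sport = :30428' | tail -n +2 | grep LISTEN"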
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:30.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov 13 03:54:30.255: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov 13 03:54:30.258: INFO: starting watch
STEP: patching
STEP: updating
Nov 13 03:54:30.265: INFO: waiting for watch events with expected annotations
Nov 13 03:54:30.265: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Nov 13 03:54:30.265: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:30.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-6557" for this suite.

•
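The spec above walks the NetworkPolicy API through discovery, create, get, list, watch, patch, update, and delete. A rough kubectl equivalent of the same operations, with a hypothetical manifest example-netpol.yaml and policy name example, would be:

  # Discovery, as in the "getting /apis/networking.k8s.io/v1" step
  kubectl get --raw /apis/networking.k8s.io/v1
  kubectl -n default apply -f example-netpol.yaml          # create
  kubectl get networkpolicies -A                           # cluster-wide listing
  kubectl -n default patch networkpolicy example --type=merge \
    -p '{"metadata":{"annotations":{"patched":"true"}}}'   # the annotation the watch above waits for
  kubectl -n default delete networkpolicies --all          # delete a collection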
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":6,"skipped":1270,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":442,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:30.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Nov 13 03:54:30.384: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:30.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7356" for this suite.


S [SKIPPING] [0.034 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:05.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-7657
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:54:05.514: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:05.544: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:07.548: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:09.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:11.549: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:13.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:15.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:17.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:19.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:21.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:23.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:25.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:27.549: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:54:27.554: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:54:33.575: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:54:33.575: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:33.582: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:33.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7657" for this suite.


S [SKIPPING] [28.185 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:30.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4014.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4014.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4014.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4014.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4014.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4014.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 13 03:54:38.612: INFO: DNS probes using dns-4014/dns-test-e39fe201-784d-4656-8b50-76c6ae9b6d46 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:38.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4014" for this suite.


• [SLOW TEST:8.107 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":7,"skipped":1375,"failed":0}

SSSSSSS
------------------------------
Nov 13 03:54:38.645: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:22.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Nov 13 03:54:32.908: INFO: Waiting up to 2m0s to get response from 10.244.3.82:8080
Nov 13 03:54:32.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-test4x2zx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip'
Nov 13 03:54:33.138: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip\n"
Nov 13 03:54:33.138: INFO: stdout: "10.244.0.11:54544"
STEP: Verifying the preserved source ip
Nov 13 03:54:33.138: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Nov 13 03:54:33.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-test4x2zx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Nov 13 03:54:33.378: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Nov 13 03:54:33.378: INFO: stdout: "10.244.0.11:33752"
STEP: Verifying the preserved source ip
Nov 13 03:54:33.378: INFO: Waiting up to 2m0s to get response from 10.244.4.198:8080
Nov 13 03:54:33.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-test4x2zx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip'
Nov 13 03:54:33.612: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip\n"
Nov 13 03:54:33.613: INFO: stdout: "10.244.0.11:56710"
STEP: Verifying the preserved source ip
Nov 13 03:54:33.613: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Nov 13 03:54:33.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-test4x2zx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Nov 13 03:54:33.847: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Nov 13 03:54:33.847: INFO: stdout: "10.244.0.11:54310"
STEP: Verifying the preserved source ip
Nov 13 03:54:33.847: INFO: Waiting up to 2m0s to get response from 10.244.0.11:8080
Nov 13 03:54:33.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testc6zxj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip'
Nov 13 03:54:34.116: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip\n"
Nov 13 03:54:34.116: INFO: stdout: "10.244.3.82:59498"
STEP: Verifying the preserved source ip
Nov 13 03:54:34.116: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Nov 13 03:54:34.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testc6zxj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Nov 13 03:54:34.433: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Nov 13 03:54:34.433: INFO: stdout: "10.244.3.82:38000"
STEP: Verifying the preserved source ip
Nov 13 03:54:34.434: INFO: Waiting up to 2m0s to get response from 10.244.4.198:8080
Nov 13 03:54:34.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testc6zxj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip'
Nov 13 03:54:34.710: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip\n"
Nov 13 03:54:34.710: INFO: stdout: "10.244.3.82:38798"
STEP: Verifying the preserved source ip
Nov 13 03:54:34.710: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Nov 13 03:54:34.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testc6zxj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Nov 13 03:54:35.421: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Nov 13 03:54:35.421: INFO: stdout: "10.244.3.82:43240"
STEP: Verifying the preserved source ip
Nov 13 03:54:35.421: INFO: Waiting up to 2m0s to get response from 10.244.0.11:8080
Nov 13 03:54:35.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testnm77l -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip'
Nov 13 03:54:35.679: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip\n"
Nov 13 03:54:35.679: INFO: stdout: "10.244.1.9:46820"
STEP: Verifying the preserved source ip
Nov 13 03:54:35.679: INFO: Waiting up to 2m0s to get response from 10.244.3.82:8080
Nov 13 03:54:35.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testnm77l -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip'
Nov 13 03:54:35.909: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip\n"
Nov 13 03:54:35.909: INFO: stdout: "10.244.1.9:40272"
STEP: Verifying the preserved source ip
Nov 13 03:54:35.909: INFO: Waiting up to 2m0s to get response from 10.244.4.198:8080
Nov 13 03:54:35.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testnm77l -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip'
Nov 13 03:54:36.148: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip\n"
Nov 13 03:54:36.148: INFO: stdout: "10.244.1.9:48152"
STEP: Verifying the preserved source ip
Nov 13 03:54:36.148: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Nov 13 03:54:36.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testnm77l -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Nov 13 03:54:36.376: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Nov 13 03:54:36.376: INFO: stdout: "10.244.1.9:36526"
STEP: Verifying the preserved source ip
Nov 13 03:54:36.376: INFO: Waiting up to 2m0s to get response from 10.244.0.11:8080
Nov 13 03:54:36.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqdsz6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip'
Nov 13 03:54:37.131: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip\n"
Nov 13 03:54:37.131: INFO: stdout: "10.244.4.198:43724"
STEP: Verifying the preserved source ip
Nov 13 03:54:37.131: INFO: Waiting up to 2m0s to get response from 10.244.3.82:8080
Nov 13 03:54:37.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqdsz6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip'
Nov 13 03:54:37.511: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip\n"
Nov 13 03:54:37.511: INFO: stdout: "10.244.4.198:40960"
STEP: Verifying the preserved source ip
Nov 13 03:54:37.511: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Nov 13 03:54:37.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqdsz6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Nov 13 03:54:37.755: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Nov 13 03:54:37.755: INFO: stdout: "10.244.4.198:55410"
STEP: Verifying the preserved source ip
Nov 13 03:54:37.755: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Nov 13 03:54:37.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqdsz6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Nov 13 03:54:38.091: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Nov 13 03:54:38.091: INFO: stdout: "10.244.4.198:39596"
STEP: Verifying the preserved source ip
Nov 13 03:54:38.091: INFO: Waiting up to 2m0s to get response from 10.244.0.11:8080
Nov 13 03:54:38.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqzh2d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip'
Nov 13 03:54:38.339: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.11:8080/clientip\n"
Nov 13 03:54:38.339: INFO: stdout: "10.244.2.6:40280"
STEP: Verifying the preserved source ip
Nov 13 03:54:38.339: INFO: Waiting up to 2m0s to get response from 10.244.3.82:8080
Nov 13 03:54:38.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqzh2d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip'
Nov 13 03:54:38.553: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.82:8080/clientip\n"
Nov 13 03:54:38.553: INFO: stdout: "10.244.2.6:51442"
STEP: Verifying the preserved source ip
Nov 13 03:54:38.553: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Nov 13 03:54:38.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqzh2d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Nov 13 03:54:38.806: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Nov 13 03:54:38.806: INFO: stdout: "10.244.2.6:33294"
STEP: Verifying the preserved source ip
Nov 13 03:54:38.807: INFO: Waiting up to 2m0s to get response from 10.244.4.198:8080
Nov 13 03:54:38.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-7621 exec no-snat-testqzh2d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip'
Nov 13 03:54:39.059: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.198:8080/clientip\n"
Nov 13 03:54:39.059: INFO: stdout: "10.244.2.6:57444"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-7621" for this suite.


• [SLOW TEST:16.246 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":4,"skipped":406,"failed":0}
Nov 13 03:54:39.070: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:20.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1113 03:52:20.413880      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:52:20.414: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:52:20.415: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-7272
STEP: creating service up-down-1 in namespace services-7272
STEP: creating replication controller up-down-1 in namespace services-7272
I1113 03:52:20.434011      30 runners.go:190] Created replication controller with name: up-down-1, namespace: services-7272, replica count: 3
I1113 03:52:23.484893      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:26.486116      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:29.487226      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:32.487981      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:35.488604      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:38.489005      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-7272
STEP: creating service up-down-2 in namespace services-7272
STEP: creating replication controller up-down-2 in namespace services-7272
I1113 03:52:38.502512      30 runners.go:190] Created replication controller with name: up-down-2, namespace: services-7272, replica count: 3
I1113 03:52:41.553978      30 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:44.554834      30 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:52:47.558571      30 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Nov 13 03:52:47.561: INFO: Creating new host exec pod
Nov 13 03:52:47.575: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:49.578: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:51.582: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:52:51.582: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:52:59.600: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.41.68:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-host-exec-pod
Nov 13 03:52:59.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.41.68:80 2>&1 || true; echo; done'
Nov 13 03:52:59.963: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n"
Nov 13 03:52:59.963: INFO: stdout: "up-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\n"
Nov 13 03:52:59.963: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.41.68:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-exec-pod-pmgsb
Nov 13 03:52:59.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-exec-pod-pmgsb -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.41.68:80 2>&1 || true; echo; done'
Nov 13 03:53:00.329: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.41.68:80\n+ echo\n"
Nov 13 03:53:00.330: INFO: stdout: "up-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-gzfhz\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-c7dlh\nup-down-1-c7dlh\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-gzfhz\nup-down-1-mzk2w\nup-down-1-mzk2w\nup-down-1-mzk2w\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7272
STEP: Deleting pod verify-service-up-exec-pod-pmgsb in namespace services-7272
STEP: verifying service up-down-2 is up
Nov 13 03:53:00.342: INFO: Creating new host exec pod
Nov 13 03:53:00.353: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:02.357: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:04.357: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:06.356: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:08.357: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:10.358: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:12.361: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:14.356: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:16.358: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:18.358: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:53:18.358: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:53:22.386: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-host-exec-pod
Nov 13 03:53:22.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done'
Nov 13 03:53:22.953: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.6.126:80" and "+ echo"
Nov 13 03:53:22.953: INFO: stdout: "up-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\n"
Nov 13 03:53:22.954: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-exec-pod-d8dmt
Nov 13 03:53:22.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-exec-pod-d8dmt -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done'
Nov 13 03:53:23.667: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.6.126:80" and "+ echo"
Nov 13 03:53:23.667: INFO: stdout: "up-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7272
STEP: Deleting pod verify-service-up-exec-pod-d8dmt in namespace services-7272
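(The check passes because every expected up-down-2 backend name appears in the output from both probe pods. A quick way to see the per-backend distribution in output like the above is to reuse the exact probe command from the log, against the up-down-2 ClusterIP 10.233.6.126, and pipe the result through sort and uniq; this is only a sketch of something one could run while the exec pod still exists.)
    # Count how many of the 150 probes each up-down-2 backend answered.
    kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 \
      exec verify-service-up-host-exec-pod -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done' \
      | sort | uniq -c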
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-7272, will wait for the garbage collector to delete the pods
Nov 13 03:53:23.753: INFO: Deleting ReplicationController up-down-1 took: 3.561402ms
Nov 13 03:53:23.854: INFO: Terminating ReplicationController up-down-1 pods took: 100.909775ms
STEP: verifying service up-down-1 is not up
Nov 13 03:53:41.564: INFO: Creating new host exec pod
Nov 13 03:53:41.577: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:43.581: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:45.581: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:47.581: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov 13 03:53:47.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.41.68:80 && echo service-down-failed'
Nov 13 03:53:49.825: INFO: rc: 28
Nov 13 03:53:49.825: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.41.68:80 && echo service-down-failed" in pod services-7272/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.41.68:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.41.68:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7272
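(Exit code 28 above is curl's "operation timed out" error: nothing answers on the old up-down-1 ClusterIP any more, the two-second connect timeout expires, and the "echo service-down-failed" marker after the "&&" is never printed, which is exactly the outcome the test wants. A sketch of that pass/fail logic in shell form, using the same IP, namespace and pod name from the log.)
    # Sketch: the service-is-down probe passes when curl times out (rc 28)
    # and therefore never reaches the success marker after "&&".
    if kubectl --namespace=services-7272 exec verify-service-down-host-exec-pod -- \
         /bin/sh -c 'curl -g -s --connect-timeout 2 http://10.233.41.68:80'; then
      echo "service-down-failed"            # got a response: service is still up
    else
      echo "service unreachable as expected (exit $?)"
    fi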
STEP: verifying service up-down-2 is still up
Nov 13 03:53:49.832: INFO: Creating new host exec pod
Nov 13 03:53:49.845: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:51.850: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:53.849: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:55.851: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:57.849: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:53:57.849: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:54:09.865: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-host-exec-pod
Nov 13 03:54:09.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done'
Nov 13 03:54:10.268: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.6.126:80" and "+ echo"
Nov 13 03:54:10.268: INFO: stdout: "up-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\n"
Nov 13 03:54:10.268: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-exec-pod-mtmc9
Nov 13 03:54:10.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-exec-pod-mtmc9 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done'
Nov 13 03:54:11.070: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.6.126:80" and "+ echo"
Nov 13 03:54:11.070: INFO: stdout: "up-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7272
STEP: Deleting pod verify-service-up-exec-pod-mtmc9 in namespace services-7272
STEP: creating service up-down-3 in namespace services-7272
STEP: creating service up-down-3 in namespace services-7272
STEP: creating replication controller up-down-3 in namespace services-7272
I1113 03:54:11.091114      30 runners.go:190] Created replication controller with name: up-down-3, namespace: services-7272, replica count: 3
I1113 03:54:14.143628      30 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:54:17.144211      30 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:54:20.146198      30 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
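(With up-down-3's three replicas running, the suite goes back to confirming that up-down-2 is unaffected. Before probing a freshly created service like up-down-3, it can also be useful to confirm its Endpoints object has been populated; a small sketch, where the name=up-down-3 selector is an assumption about the pod labels.)
    # Sketch: confirm the new service has endpoints for all three replicas.
    kubectl --namespace=services-7272 get endpoints up-down-3
    kubectl --namespace=services-7272 get pods -l name=up-down-3 -o wide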
STEP: verifying service up-down-2 is still up
Nov 13 03:54:20.148: INFO: Creating new host exec pod
Nov 13 03:54:20.162: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:22.166: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:24.166: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:54:24.166: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:54:30.426: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-host-exec-pod
Nov 13 03:54:30.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done'
Nov 13 03:54:30.877: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.6.126:80" and "+ echo"
Nov 13 03:54:30.877: INFO: stdout: "up-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\n"
Nov 13 03:54:30.877: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-exec-pod-5jtlw
Nov 13 03:54:30.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-exec-pod-5jtlw -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.6.126:80 2>&1 || true; echo; done'
Nov 13 03:54:31.262: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.6.126:80\n+ echo\n"
Nov 13 03:54:31.262: INFO: stdout: "up-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-b4bjg\nup-down-2-b4bjg\nup-down-2-q89nc\nup-down-2-5f54w\nup-down-2-q89nc\nup-down-2-5f54w\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7272
STEP: Deleting pod verify-service-up-exec-pod-5jtlw in namespace services-7272
STEP: verifying service up-down-3 is up
Nov 13 03:54:31.277: INFO: Creating new host exec pod
Nov 13 03:54:31.288: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:33.313: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:54:33.313: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:54:39.330: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.16.228:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-host-exec-pod
Nov 13 03:54:39.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.16.228:80 2>&1 || true; echo; done'
Nov 13 03:54:39.965: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n"
Nov 13 03:54:39.966: INFO: stdout: "up-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-nqxn8\n"
Nov 13 03:54:39.966: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.16.228:80 2>&1 || true; echo; done" in pod services-7272/verify-service-up-exec-pod-28n5x
Nov 13 03:54:39.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7272 exec verify-service-up-exec-pod-28n5x -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.16.228:80 2>&1 || true; echo; done'
Nov 13 03:54:40.322: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.16.228:80\n+ echo\n"
Nov 13 03:54:40.323: INFO: stdout: "up-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-q8qct\nup-down-3-nqxn8\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-q8qct\nup-down-3-q8qct\nup-down-3-f6dwd\nup-down-3-f6dwd\nup-down-3-nqxn8\nup-down-3-f6dwd\nup-down-3-nqxn8\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7272
STEP: Deleting pod verify-service-up-exec-pod-28n5x in namespace services-7272
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:40.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7272" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:139.959 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":283,"failed":0}
Nov 13 03:54:40.357: INFO: Running AfterSuite actions on all nodes
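Note: the "up and down services" spec above verifies each service by exec'ing into two pods (one created as a host-network "host exec" pod and one as a regular exec pod, which is why the log shows two nearly identical command blocks per service), fetching the ClusterIP 150 times, and checking that every backend pod name appears among the responses. A minimal standalone sketch of that check follows; the namespace, exec pod, ClusterIP and backend names are copied from the log, and kubectl access to the same cluster is assumed.

# Hit the service VIP repeatedly from inside the cluster, as the test does.
NS=services-7272
POD=verify-service-up-host-exec-pod   # any running pod with wget would do
VIP=10.233.16.228                     # ClusterIP of service up-down-3 (from the log)

OUT=$(kubectl --namespace "$NS" exec "$POD" -- /bin/sh -c \
  "for i in \$(seq 1 150); do wget -q -T 1 -O - http://$VIP:80 2>&1 || true; echo; done")

# The spec passes when every expected backend answered at least once.
for backend in up-down-3-q8qct up-down-3-f6dwd up-down-3-nqxn8; do
  echo "$OUT" | grep -q "$backend" || echo "missing backend: $backend"
done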


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:17.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-1144
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:54:17.555: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:17.589: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:19.593: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:21.592: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:23.593: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:25.592: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:27.593: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:29.593: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:31.594: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:33.591: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:35.592: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:37.593: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:54:37.597: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:39.600: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:54:43.623: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:54:43.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:43.630: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1144" for this suite.


S [SKIPPING] [26.191 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Nov 13 03:54:43.641: INFO: Running AfterSuite actions on all nodes
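Note: the skip above is a test-environment condition, not a failure: the spec needs at least two schedulable test nodes and the framework did not find two for this run. Before re-running, the node inventory can be checked directly (illustrative commands, assuming kubectl access):

# List nodes and show which ones are cordoned (spec.unschedulable set).
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\n"}{end}'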


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:47.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-7083
STEP: creating a client pod for probing the service svc-udp
Nov 13 03:53:47.858: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:49.861: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:51.862: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:53.861: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:55.862: INFO: The status of Pod pod-client is Running (Ready = true)
Nov 13 03:53:57.624: INFO: Pod client logs: Sat Nov 13 03:53:50 UTC 2021
Sat Nov 13 03:53:50 UTC 2021 Try: 1

Sat Nov 13 03:53:50 UTC 2021 Try: 2

Sat Nov 13 03:53:50 UTC 2021 Try: 3

Sat Nov 13 03:53:50 UTC 2021 Try: 4

Sat Nov 13 03:53:50 UTC 2021 Try: 5

Sat Nov 13 03:53:50 UTC 2021 Try: 6

Sat Nov 13 03:53:50 UTC 2021 Try: 7

Sat Nov 13 03:53:55 UTC 2021 Try: 8

Sat Nov 13 03:53:55 UTC 2021 Try: 9

Sat Nov 13 03:53:55 UTC 2021 Try: 10

Sat Nov 13 03:53:55 UTC 2021 Try: 11

Sat Nov 13 03:53:55 UTC 2021 Try: 12

Sat Nov 13 03:53:55 UTC 2021 Try: 13

STEP: creating a backend pod pod-server-1 for the service svc-udp
Nov 13 03:53:57.637: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:59.639: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:01.641: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:03.641: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:05.641: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:07.641: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:09.641: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:11.642: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:13.641: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:15.644: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-7083 to expose endpoints map[pod-server-1:[80]]
Nov 13 03:54:15.654: INFO: successfully validated that service svc-udp in namespace conntrack-7083 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Nov 13 03:54:25.817: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:27.822: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:29.821: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:31.821: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:33.823: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Nov 13 03:54:33.825: INFO: Cleaning up pod-server-1 pod
Nov 13 03:54:33.836: INFO: Waiting for pod pod-server-1 to disappear
Nov 13 03:54:33.838: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-7083 to expose endpoints map[pod-server-2:[80]]
Nov 13 03:54:33.845: INFO: successfully validated that service svc-udp in namespace conntrack-7083 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:43.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-7083" for this suite.


• [SLOW TEST:56.070 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":3,"skipped":425,"failed":0}
Nov 13 03:54:43.870: INFO: Running AfterSuite actions on all nodes
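Note: the conntrack spec above exists because UDP has no handshake: when the backend of a UDP ClusterIP service is replaced, conntrack entries created for the old pod can keep black-holing the client's datagrams unless kube-proxy flushes them. The spec therefore keeps pod-client sending while pod-server-1 is swapped for pod-server-2, then checks via the server pod's logs that the new backend receives the traffic. A rough manual equivalent is sketched below; the namespace, service and server pod names come from the log, while the busybox image, the udp-probe pod and the use of nc are assumptions.

NS=conntrack-7083
VIP=$(kubectl -n "$NS" get svc svc-udp -o jsonpath='{.spec.clusterIP}')

# Keep sending one datagram per second to the service VIP, like pod-client does.
kubectl -n "$NS" run udp-probe --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do date | nc -u -w1 $VIP 80; sleep 1; done"

# Replace the backend, then confirm datagrams reach the new server pod
# (the e2e server image logs what it receives; other images will differ).
kubectl -n "$NS" delete pod pod-server-1
kubectl -n "$NS" logs pod-server-2 --tail=10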


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:31.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-0b238d04-c1d0-4c66-a369-c7da7bfaff58]
STEP: Verifying pods for RC slow-terminating-unready-pod
Nov 13 03:54:31.410: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
Nov 13 03:54:36.416: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Nov 13 03:54:38.433: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-v4s82]: "NOW: 2021-11-13 03:54:38.427580866 +0000 UTC m=+2.404275431", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-5950.svc.cluster.local
Nov 13 03:54:38.433: INFO: Creating new exec pod
Nov 13 03:54:42.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5950 exec execpod-tbgft -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/'
Nov 13 03:54:42.733: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/\n"
Nov 13 03:54:42.733: INFO: stdout: "NOW: 2021-11-13 03:54:42.714658955 +0000 UTC m=+6.691353572"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-5950 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Nov 13 03:54:47.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5950 exec execpod-tbgft -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/; test "$?" -ne "0"'
Nov 13 03:54:49.020: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Nov 13 03:54:49.020: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Nov 13 03:54:49.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5950 exec execpod-tbgft -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/'
Nov 13 03:54:50.404: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/\n"
Nov 13 03:54:50.404: INFO: stdout: "NOW: 2021-11-13 03:54:50.394676313 +0000 UTC m=+14.371370878"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-5950
STEP: deleting service tolerate-unready in namespace services-5950
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:50.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5950" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:19.062 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":4,"skipped":1048,"failed":0}
Nov 13 03:54:50.441: INFO: Running AfterSuite actions on all nodes
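Note: the spec above relies on a Service that keeps publishing endpoints for pods that are not Ready (and for pods that linger in Terminating), toggles that behaviour off and back on, and probes the service DNS name each time. On current APIs this behaviour is expressed with spec.publishNotReadyAddresses; the sketch below re-creates the Service and both probes, with the namespace, names and DNS name copied from the log, while the container port and the manifest itself are illustrative.

# Service that still gets endpoints for not-Ready / slowly-terminating pods.
kubectl -n services-5950 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: tolerate-unready
spec:
  selector:
    name: slow-terminating-unready-pod
  publishNotReadyAddresses: true   # keep endpoints even when pods are not Ready
  ports:
  - port: 80
    targetPort: 8080               # assumed backend port; adjust to the image used
EOF

# Positive probe: the DNS name must answer while the pod is unready or terminating.
kubectl -n services-5950 exec execpod-tbgft -- /bin/sh -c \
  'curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/'

# Negative probe after turning the toleration off: curl must fail
# ("test 7 -ne 0" in the log is curl exit code 7, failed to connect).
kubectl -n services-5950 exec execpod-tbgft -- /bin/sh -c \
  'curl -q -s --connect-timeout 2 http://tolerate-unready.services-5950.svc.cluster.local:80/; test "$?" -ne 0'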


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:52:47.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-4565
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:52:47.132: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:52:47.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:49.165: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:52:51.166: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:53.165: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:55.165: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:57.166: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:52:59.166: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:01.165: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:03.165: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:05.166: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:07.166: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:53:09.168: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:53:09.173: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:53:15.192: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:53:15.192: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Nov 13 03:53:15.214: INFO: Service node-port-service in namespace nettest-4565 found.
Nov 13 03:53:15.229: INFO: Service session-affinity-service in namespace nettest-4565 found.
STEP: Waiting for NodePort service to expose endpoint
Nov 13 03:53:16.233: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Nov 13 03:53:17.237: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.233.49.214:80 (config.clusterIP)
Nov 13 03:53:17.243: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.174:9080/dial?request=hostname&protocol=http&host=10.233.49.214&port=80&tries=1'] Namespace:nettest-4565 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:53:17.243: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:53:17.361: INFO: Waiting for responses: map[netserver-1:{}]
Nov 13 03:53:19.365: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.174:9080/dial?request=hostname&protocol=http&host=10.233.49.214&port=80&tries=1'] Namespace:nettest-4565 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:53:19.365: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:53:19.991: INFO: Waiting for responses: map[netserver-1:{}]
Nov 13 03:53:21.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.174:9080/dial?request=hostname&protocol=http&host=10.233.49.214&port=80&tries=1'] Namespace:nettest-4565 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:53:21.995: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:53:22.205: INFO: Waiting for responses: map[netserver-1:{}]
Nov 13 03:53:24.209: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.174:9080/dial?request=hostname&protocol=http&host=10.233.49.214&port=80&tries=1'] Namespace:nettest-4565 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:53:24.209: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:53:24.456: INFO: Waiting for responses: map[]
Nov 13 03:53:24.456: INFO: reached 10.233.49.214 after 3/34 tries
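Note: each polling attempt above asks the test webserver running inside test-container-pod (listening on 9080) to dial the target host and port on its behalf and report the hostnames that answered; the ExecWithOptions entries are simply the framework running that curl inside the pod, and the step completes once the expected backends have all responded ("reached 10.233.49.214 after 3/34 tries"). The same probe can be issued by hand; the namespace, pod name, pod IP and ClusterIP below are copied from the log.

# Ask the in-pod webserver to dial the service VIP once and report who answered.
kubectl -n nettest-4565 exec test-container-pod -- curl -g -q -s \
  'http://10.244.4.174:9080/dial?request=hostname&protocol=http&host=10.233.49.214&port=80&tries=1'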
STEP: Deleting a pod which will be replaced with a new endpoint
Nov 13 03:53:24.463: INFO: Waiting for pod netserver-0 to disappear
Nov 13 03:53:24.465: INFO: Pod netserver-0 no longer exists
Nov 13 03:53:25.466: INFO: Waiting for amount of service:node-port-service endpoints to be 1
STEP: dialing(http) test-container-pod --> 10.233.49.214:80 (config.clusterIP) (endpoint recovery)
Nov 13 03:53:30.475: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.174:9080/dial?request=hostname&protocol=http&host=10.233.49.214&port=80&tries=1'] Namespace:nettest-4565 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:53:30.475: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:53:30.566: INFO: Waiting for responses: map[]
[... the same three-line probe record (ExecWithOptions curl, kubeConfig, "Waiting for responses: map[]") repeats 33 more times, roughly every two seconds, from 03:53:32 through the final attempt at 03:54:52 ...]
Nov 13 03:54:52.555: INFO: reached 10.233.49.214 after 33/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:54:52.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4565" for this suite.


• [SLOW TEST:125.561 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
------------------------------
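The probe loop recorded above is the framework's endpoint-recovery check: it execs curl inside test-container-pod against the agnhost /dial handler on port 9080, which in turn dials the service ClusterIP 10.233.49.214:80 with request=hostname and reports which backend answered. A rough standalone sketch of the same request shape in Go follows. It assumes the /dial handler returns a JSON body with a "responses" array (the "Waiting for responses" map above is built from it), and it only works from a machine that can reach the pod and service networks, which is why the real test execs curl from inside the cluster.

// dialprobe.go: hedged sketch of the /dial polling loop seen in the log above.
// The JSON shape and the success check are assumptions; the real test compares
// the responses against the expected set of endpoint hostnames.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

type dialResponse struct {
	Responses []string `json:"responses"`
}

// probeOnce asks the test pod's agnhost webserver to dial the target once and
// returns the hostname(s) that answered.
func probeOnce(testPodIP, targetHost string, targetPort int) ([]string, error) {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", "1")

	resp, err := http.Get(fmt.Sprintf("http://%s:9080/dial?%s", testPodIP, q.Encode()))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		return nil, err
	}
	return dr.Responses, nil
}

func main() {
	// IPs come from the log above; the "33/34 tries" line implies a budget of 34.
	const maxTries = 34
	for i := 1; i <= maxTries; i++ {
		hosts, err := probeOnce("10.244.4.174", "10.233.49.214", 80)
		if err == nil && len(hosts) > 0 {
			fmt.Printf("reached 10.233.49.214 after %d/%d tries: %v\n", i, maxTries, hosts)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("never reached 10.233.49.214")
}

Keeping tries=1 per request and looping from the caller is what lets the framework log every attempt and stop as soon as one backend answers, as in the "reached 10.233.49.214 after 33/34 tries" line above.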
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:54:33.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-8524
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:54:33.854: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:54:33.889: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:35.892: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:37.893: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:39.894: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:41.893: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:43.892: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:45.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:47.892: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:49.893: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:51.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:54:53.892: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:54:53.896: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:54:55.903: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:55:01.931: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:55:01.931: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:55:01.938: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:55:01.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8524" for this suite.


S [SKIPPING] [28.205 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
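The skip above reads "Requires at least 2 nodes (not -1)". The -1 is best read as a sentinel meaning the schedulable-node count could not be determined rather than as a real count, since the same test had just brought netserver-0 and netserver-1 to Ready. For orientation only, a rough client-go sketch of that kind of count (this is not the framework's own helper at network/utils.go:782) might look like:

// countnodes.go: hedged sketch of counting ready, schedulable worker nodes.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isSchedulableAndReady mirrors the usual checks: not cordoned, no NoSchedule
// taint, and a Ready=True node condition.
func isSchedulableAndReady(n corev1.Node) bool {
	if n.Spec.Unschedulable {
		return false
	}
	for _, t := range n.Spec.Taints {
		if t.Effect == corev1.TaintEffectNoSchedule {
			return false
		}
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// A failed lookup is the kind of situation a -1 sentinel would report.
		fmt.Printf("node count: -1 (lookup failed: %v)\n", err)
		return
	}
	count := 0
	for _, n := range nodes.Items {
		if isSchedulableAndReady(n) {
			count++
		}
	}
	fmt.Println("ready, schedulable nodes:", count)
}

The control-plane nodes dumped later in this log (master1 through master3) carry the node-role.kubernetes.io/master NoSchedule taint, so a count like this would only see node1 and node2.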
Nov 13 03:55:01.950: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:49.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-8646
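For orientation, a rough client-go sketch of what this STEP amounts to: creating a type=NodePort UDP Service named svc-udp in conntrack-8646. The selector label is an illustrative placeholder, and while port 80 appears in the endpoint map logged further down, the exact split between service port and container port here is an assumption.

// svcudp.go: hedged sketch of creating the NodePort UDP service for the test.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "svc-udp"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "udp-server"}, // placeholder label
			Ports: []corev1.ServicePort{{
				Name:       "udp",
				Protocol:   corev1.ProtocolUDP,
				Port:       80,                 // 80 shows up in the endpoint map below
				TargetPort: intstr.FromInt(80), // assumed to match the container port
			}},
		},
	}

	if _, err := cs.CoreV1().Services("conntrack-8646").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}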
STEP: creating a client pod for probing the service svc-udp
Nov 13 03:53:49.414: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:51.418: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:53.417: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:55.419: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:57.420: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:53:59.418: INFO: The status of Pod pod-client is Running (Ready = true)
Nov 13 03:53:59.425: INFO: Pod client logs: Sat Nov 13 03:53:52 UTC 2021
Sat Nov 13 03:53:52 UTC 2021 Try: 1

Sat Nov 13 03:53:52 UTC 2021 Try: 2

Sat Nov 13 03:53:52 UTC 2021 Try: 3

Sat Nov 13 03:53:52 UTC 2021 Try: 4

Sat Nov 13 03:53:52 UTC 2021 Try: 5

Sat Nov 13 03:53:52 UTC 2021 Try: 6

Sat Nov 13 03:53:52 UTC 2021 Try: 7

Sat Nov 13 03:53:57 UTC 2021 Try: 8

Sat Nov 13 03:53:57 UTC 2021 Try: 9

Sat Nov 13 03:53:57 UTC 2021 Try: 10

Sat Nov 13 03:53:57 UTC 2021 Try: 11

Sat Nov 13 03:53:57 UTC 2021 Try: 12

Sat Nov 13 03:53:57 UTC 2021 Try: 13

STEP: creating a backend pod pod-server-1 for the service svc-udp
Nov 13 03:53:59.437: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:01.441: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:03.441: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:05.440: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:07.441: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:09.440: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:11.443: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:13.441: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:15.440: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:54:17.442: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8646 to expose endpoints map[pod-server-1:[80]]
Nov 13 03:54:17.451: INFO: successfully validated that service svc-udp in namespace conntrack-8646 exposes endpoints map[pod-server-1:[80]]
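The "Try: N" lines in the client logs above and below come from a loop in pod-client that keeps sending the string "hostname" over UDP to the NodePort on node IP 10.10.190.208 and prints whatever comes back; the empty line after each try is where a reply would appear. A minimal standalone sketch of such a loop, assuming the agnhost netexec backend echoes its pod hostname in reply and using a placeholder NodePort (the allocated port number is not shown in this log):

// udpprobe.go: hedged sketch of the client-side UDP probe loop.
package main

import (
	"fmt"
	"net"
	"time"
)

// tryOnce sends "hostname" to the target and returns whatever the backend
// echoes back, or an error if nothing arrives before the deadline.
func tryOnce(target string) (string, error) {
	conn, err := net.DialTimeout("udp", target, 2*time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostname")); err != nil {
		return "", err
	}
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err // no reply: the blank lines in the failing run
	}
	return string(buf[:n]), nil
}

func main() {
	// 10.10.190.208 is the node IP from the log; 30080 is a placeholder NodePort.
	const target = "10.10.190.208:30080"
	for i := 1; ; i++ {
		fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), i)
		if host, err := tryOnce(target); err == nil {
			fmt.Println(host) // expected to print pod-server-1's hostname
		} else {
			fmt.Println()
		}
		time.Sleep(1 * time.Second)
	}
}

In the failing run below, the reply line stays empty for all 109 tries, which is what "FAIL: Failed to connect to backend 1" then reports.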
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Nov 13 03:55:17.530: INFO: Pod client logs: Sat Nov 13 03:53:52 UTC 2021
Sat Nov 13 03:53:52 UTC 2021 Try: 1

[... tries 2 through 108 follow in bursts of six or seven roughly every five seconds, each followed by an empty reply line; no backend hostname is ever echoed back ...]

Sat Nov 13 03:55:17 UTC 2021 Try: 109

Nov 13 03:55:17.530: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00230f500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00230f500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00230f500, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-8646".
STEP: Found 8 events.
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:53:51 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:53:51 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 330.787178ms
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:53:52 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:53:52 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:54:04 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:54:05 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 646.49041ms
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:54:05 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Nov 13 03:55:17.535: INFO: At 2021-11-13 03:54:05 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Nov 13 03:55:17.538: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Nov 13 03:55:17.538: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:49 +0000 UTC  }]
Nov 13 03:55:17.538: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:54:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:54:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:59 +0000 UTC  }]
Nov 13 03:55:17.538: INFO: 
Nov 13 03:55:17.543: INFO: 
Logging node info for node master1
Nov 13 03:55:17.545: INFO: Node Info: &Node{ObjectMeta:{master1    56d66c54-e52b-494a-a758-e4b658c4b245 150685 0 2021-11-12 21:05:50 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:13 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:13 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:13 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:13 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:17.546: INFO: 
Logging kubelet events for node master1
Nov 13 03:55:17.548: INFO: 
Logging pods the kubelet thinks are on node master1
Nov 13 03:55:17.581: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:55:17.581: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:55:17.581: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container docker-registry ready: true, restart count 0
Nov 13 03:55:17.581: INFO: 	Container nginx ready: true, restart count 0
Nov 13 03:55:17.581: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:17.581: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:17.581: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov 13 03:55:17.581: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:55:17.581: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:55:17.581: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:55:17.581: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:17.581: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.581: INFO: 	Container coredns ready: true, restart count 2
W1113 03:55:17.594874      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:17.664: INFO: 
Latency metrics for node master1
Nov 13 03:55:17.664: INFO: 
Logging node info for node master2
Nov 13 03:55:17.667: INFO: Node Info: &Node{ObjectMeta:{master2    9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 150672 0 2021-11-12 21:06:20 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:17.668: INFO: 
Logging kubelet events for node master2
Nov 13 03:55:17.670: INFO: 
Logging pods the kubelet thinks are on node master2
Nov 13 03:55:17.686: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container coredns ready: true, restart count 1
Nov 13 03:55:17.686: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:17.686: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:17.686: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:55:17.686: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:55:17.686: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:55:17.686: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:55:17.686: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:55:17.686: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:17.686: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:55:17.686: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.686: INFO: 	Container nfd-controller ready: true, restart count 0
W1113 03:55:17.698550      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:17.772: INFO: 
Latency metrics for node master2
Nov 13 03:55:17.772: INFO: 
Logging node info for node master3
Nov 13 03:55:17.775: INFO: Node Info: &Node{ObjectMeta:{master3    fce0cd54-e4d8-4ce1-b720-522aad2d7989 150675 0 2021-11-12 21:06:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:11 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:17.775: INFO: 
Logging kubelet events for node master3
Nov 13 03:55:17.777: INFO: 
Logging pods the kubelet thinks are on node master3
Nov 13 03:55:17.795: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:55:17.796: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:55:17.796: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:17.796: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container autoscaler ready: true, restart count 1
Nov 13 03:55:17.796: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:55:17.796: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov 13 03:55:17.796: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:55:17.796: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:17.796: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:17.796: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.796: INFO: 	Container kube-apiserver ready: true, restart count 0
W1113 03:55:17.813166      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:17.876: INFO: 
Latency metrics for node master3
Nov 13 03:55:17.876: INFO: 
Logging node info for node node1
Nov 13 03:55:17.879: INFO: Node Info: &Node{ObjectMeta:{node1    6ceb907c-9809-4d18-88c6-b1e10ba80f97 150688 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-13 01:56:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:15 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:15 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:15 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:15 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:17.880: INFO: 
Logging kubelet events for node node1
Nov 13 03:55:17.882: INFO: 
Logging pods the kubelet thinks are on node node1
Nov 13 03:55:17.898: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:55:17.898: INFO: 	Container init ready: false, restart count 0
Nov 13 03:55:17.898: INFO: 	Container install ready: false, restart count 0
Nov 13 03:55:17.898: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:55:17.898: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 13 03:55:17.898: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container config-reloader ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container grafana ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container prometheus ready: true, restart count 1
Nov 13 03:55:17.898: INFO: execpodsh7t5 started at 2021-11-13 03:53:43 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:55:17.898: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:17.898: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:55:17.898: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:17.898: INFO: pod-client started at 2021-11-13 03:53:49 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container pod-client ready: true, restart count 0
Nov 13 03:55:17.898: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:55:17.898: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:55:17.898: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 13 03:55:17.898: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov 13 03:55:17.898: INFO: nodeport-update-service-2phmx started at 2021-11-13 03:53:34 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:55:17.898: INFO: nodeport-update-service-cr2sn started at 2021-11-13 03:53:34 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:55:17.898: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:55:17.898: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:55:17.898: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:55:17.898: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:17.898: INFO: 	Container kube-sriovdp ready: true, restart count 0
W1113 03:55:17.914032      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:18.194: INFO: 
Latency metrics for node node1
Nov 13 03:55:18.194: INFO: 
Logging node info for node node2
Nov 13 03:55:18.199: INFO: Node Info: &Node{ObjectMeta:{node2    652722dd-12b1-4529-ba4d-a00c590e4a68 150671 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 02:52:24 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:10 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:10 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:10 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:10 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:18.199: INFO: 
Logging kubelet events for node node2
Nov 13 03:55:18.202: INFO: 
Logging pods the kubelet thinks are on node node2
Nov 13 03:55:18.218: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:55:18.218: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:55:18.218: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:55:18.218: INFO: 	Container init ready: false, restart count 0
Nov 13 03:55:18.218: INFO: 	Container install ready: false, restart count 0
Nov 13 03:55:18.218: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:18.218: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:18.218: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container tas-extender ready: true, restart count 0
Nov 13 03:55:18.218: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 03:55:18.218: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:55:18.218: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:55:18.218: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:18.218: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:55:18.218: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:55:18.218: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.219: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:18.219: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.219: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov 13 03:55:18.219: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.219: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:55:18.219: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:18.219: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:55:18.219: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:55:18.219: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:55:18.219: INFO: pod-server-1 started at 2021-11-13 03:53:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.219: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:55:18.219: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:18.219: INFO: 	Container nfd-worker ready: true, restart count 0
W1113 03:55:18.231320      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:18.465: INFO: 
Latency metrics for node node2
Nov 13 03:55:18.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8646" for this suite.


• Failure [89.105 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Nov 13 03:55:17.530: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":505,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Nov 13 03:55:18.479: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:53:34.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-9199
Nov 13 03:53:34.212: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-9199
I1113 03:53:34.224328      24 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-9199, replica count: 2
I1113 03:53:37.276150      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:53:40.277030      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:53:43.278066      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 13 03:53:43.278: INFO: Creating new exec pod
Nov 13 03:53:54.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Nov 13 03:53:54.714: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Nov 13 03:53:54.714: INFO: stdout: "nodeport-update-service-cr2sn"
Nov 13 03:53:54.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.48.30 80'
Nov 13 03:53:55.062: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.48.30 80\nConnection to 10.233.48.30 80 port [tcp/http] succeeded!\n"
Nov 13 03:53:55.062: INFO: stdout: "nodeport-update-service-cr2sn"
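(Editor's note) Both the service-name and ClusterIP probes above succeeded; the checks that follow target the node IP and the newly allocated NodePort and keep failing with "Connection refused", which typically means nothing is answering on that NodePort on the node yet. Condensed, the retry loop below amounts to this sketch (same values as in the log; the iteration count is arbitrary):
NODE_IP=10.10.190.207
NODE_PORT=30630
for i in $(seq 1 30); do
  if kubectl --kubeconfig=/root/.kube/config -n services-9199 exec execpodsh7t5 -- \
       /bin/sh -c "echo hostName | nc -v -t -w 2 $NODE_IP $NODE_PORT"; then
    echo "NodePort reachable via $NODE_IP:$NODE_PORT"
    break
  fi
  sleep 1
done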
Nov 13 03:53:55.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:53:55.313: INFO: rc: 1
Nov 13 03:53:55.313: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:53:56.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:53:57.753: INFO: rc: 1
Nov 13 03:53:57.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:53:58.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:53:58.878: INFO: rc: 1
Nov 13 03:53:58.878: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:53:59.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:53:59.714: INFO: rc: 1
Nov 13 03:53:59.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:54:00.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:54:00.598: INFO: rc: 1
Nov 13 03:54:00.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:54:01.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:54:01.581: INFO: rc: 1
Nov 13 03:54:01.581: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... 90 further retries omitted: from 03:54:02 through 03:55:37 the suite re-ran the identical '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630' probe roughly once per second. Every attempt returned rc: 1 with empty stdout and the same stderr ("nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused" / "command terminated with exit code 1"), followed by "Retrying..."; only the timestamps differ between attempts. ...]
Nov 13 03:55:38.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:38.553: INFO: rc: 1
Nov 13 03:55:38.553: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:39.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:39.569: INFO: rc: 1
Nov 13 03:55:39.569: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:40.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:40.563: INFO: rc: 1
Nov 13 03:55:40.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:41.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:41.557: INFO: rc: 1
Nov 13 03:55:41.557: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:42.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:42.571: INFO: rc: 1
Nov 13 03:55:42.571: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:43.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:43.566: INFO: rc: 1
Nov 13 03:55:43.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:44.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:44.547: INFO: rc: 1
Nov 13 03:55:44.547: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:45.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:45.572: INFO: rc: 1
Nov 13 03:55:45.572: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:46.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:46.595: INFO: rc: 1
Nov 13 03:55:46.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:47.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:47.566: INFO: rc: 1
Nov 13 03:55:47.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:48.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:48.603: INFO: rc: 1
Nov 13 03:55:48.603: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:49.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:49.570: INFO: rc: 1
Nov 13 03:55:49.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:50.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:50.573: INFO: rc: 1
Nov 13 03:55:50.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:51.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:51.561: INFO: rc: 1
Nov 13 03:55:51.561: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:52.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:52.540: INFO: rc: 1
Nov 13 03:55:52.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:53.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:53.564: INFO: rc: 1
Nov 13 03:55:53.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:54.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:54.539: INFO: rc: 1
Nov 13 03:55:54.539: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:55.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:55.558: INFO: rc: 1
Nov 13 03:55:55.558: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:55.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630'
Nov 13 03:55:55.787: INFO: rc: 1
Nov 13 03:55:55.787: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9199 exec execpodsh7t5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30630:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30630
nc: connect to 10.10.190.207 port 30630 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:55:55.787: FAIL: Unexpected error:
    <*errors.errorString | 0xc0046895c0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30630 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30630 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000681680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000681680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000681680, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
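The repeated "Connection refused" entries above are the same probe being re-run roughly once per second until the 2m0s deadline in the failure message expires. The following is a minimal, self-contained sketch of that polling pattern using only the Go standard library; the address and timings are taken from the log, and it is not the e2e framework's own helper code.

```go
// reachability_probe.go: sketch of polling a NodePort until it accepts TCP
// connections or a deadline expires, mirroring the retry loop in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP retries a short TCP dial against addr until it succeeds or
// timeout elapses, sleeping interval between attempts.
func waitForTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the port answered; the service is reachable
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(interval)
	}
}

func main() {
	// 10.10.190.207:30630 is the node IP and NodePort from the log above.
	if err := waitForTCP("10.10.190.207:30630", time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}
```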
Nov 13 03:55:55.788: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9199".
STEP: Found 17 events.
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:34 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-2phmx
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:34 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-cr2sn
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:34 +0000 UTC - event for nodeport-update-service-2phmx: {default-scheduler } Scheduled: Successfully assigned services-9199/nodeport-update-service-2phmx to node1
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:34 +0000 UTC - event for nodeport-update-service-cr2sn: {default-scheduler } Scheduled: Successfully assigned services-9199/nodeport-update-service-cr2sn to node1
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:38 +0000 UTC - event for nodeport-update-service-2phmx: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:38 +0000 UTC - event for nodeport-update-service-2phmx: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 268.905599ms
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:38 +0000 UTC - event for nodeport-update-service-cr2sn: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:38 +0000 UTC - event for nodeport-update-service-cr2sn: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 574.707266ms
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:39 +0000 UTC - event for nodeport-update-service-2phmx: {kubelet node1} Started: Started container nodeport-update-service
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:39 +0000 UTC - event for nodeport-update-service-2phmx: {kubelet node1} Created: Created container nodeport-update-service
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:39 +0000 UTC - event for nodeport-update-service-cr2sn: {kubelet node1} Created: Created container nodeport-update-service
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:39 +0000 UTC - event for nodeport-update-service-cr2sn: {kubelet node1} Started: Started container nodeport-update-service
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:43 +0000 UTC - event for execpodsh7t5: {default-scheduler } Scheduled: Successfully assigned services-9199/execpodsh7t5 to node1
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:45 +0000 UTC - event for execpodsh7t5: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:46 +0000 UTC - event for execpodsh7t5: {kubelet node1} Created: Created container agnhost-container
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:46 +0000 UTC - event for execpodsh7t5: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 292.188504ms
Nov 13 03:55:55.815: INFO: At 2021-11-13 03:53:46 +0000 UTC - event for execpodsh7t5: {kubelet node1} Started: Started container agnhost-container
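The event dump above is gathered by the framework from the test namespace during cleanup. A comparable listing can be produced with client-go; the kubeconfig path and namespace below are the ones appearing in this log and would need adjusting for another cluster. This is a sketch, not the framework's own collection code.

```go
// list_events.go: sketch of listing events in a namespace with client-go,
// similar to the "Collecting events" step in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this log; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	events, err := clientset.CoreV1().Events("services-9199").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Print timestamp, involved object, reason, and message for each event.
		fmt.Printf("%s  %s/%s  %s: %s\n",
			e.LastTimestamp.Time.Format("2006-01-02 15:04:05"),
			e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}
```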
Nov 13 03:55:55.817: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Nov 13 03:55:55.817: INFO: execpodsh7t5                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:43 +0000 UTC  }]
Nov 13 03:55:55.818: INFO: nodeport-update-service-2phmx  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:34 +0000 UTC  }]
Nov 13 03:55:55.818: INFO: nodeport-update-service-cr2sn  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:53:34 +0000 UTC  }]
Nov 13 03:55:55.818: INFO: 
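The pod listing above prints each pod's phase and conditions; the Ready condition is the one the reachability check ultimately depends on. Below is a small hedged sketch of fetching one of those pods and reading that condition with client-go, reusing the kubeconfig path, namespace, and pod name from this log.

```go
// pod_ready.go: sketch of fetching a pod and checking its Ready condition,
// mirroring the "Ready True" entries in the pod listing above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Pod and namespace names taken from the log above.
	pod, err := clientset.CoreV1().Pods("services-9199").Get(context.TODO(), "execpodsh7t5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s phase=%s ready=%v\n", pod.Name, pod.Status.Phase, podReady(pod))
}
```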
Nov 13 03:55:55.824: INFO: 
Logging node info for node master1
Nov 13 03:55:55.827: INFO: Node Info: &Node{ObjectMeta:{master1    56d66c54-e52b-494a-a758-e4b658c4b245 150811 0 2021-11-12 21:05:50 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:54 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:54 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:54 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:54 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:55.827: INFO: 
Logging kubelet events for node master1
Nov 13 03:55:55.829: INFO: 
Logging pods the kubelet thinks are on node master1
Nov 13 03:55:55.852: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.852: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:55:55.852: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:55:55.853: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container docker-registry ready: true, restart count 0
Nov 13 03:55:55.853: INFO: 	Container nginx ready: true, restart count 0
Nov 13 03:55:55.853: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:55.853: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:55.853: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov 13 03:55:55.853: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:55:55.853: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:55:55.853: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:55:55.853: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:55.853: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.853: INFO: 	Container coredns ready: true, restart count 2
W1113 03:55:55.865698      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:55.932: INFO: 
Latency metrics for node master1
Nov 13 03:55:55.932: INFO: 
Logging node info for node master2
Nov 13 03:55:55.935: INFO: Node Info: &Node{ObjectMeta:{master2    9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 150804 0 2021-11-12 21:06:20 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:55.936: INFO: 
Logging kubelet events for node master2
Nov 13 03:55:55.938: INFO: 
Logging pods the kubelet thinks are on node master2
Nov 13 03:55:55.948: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.948: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:55:55.948: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.948: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:55:55.948: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.948: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:55:55.948: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:55.948: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:55:55.948: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:55:55.949: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.949: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:55.949: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.949: INFO: 	Container coredns ready: true, restart count 1
Nov 13 03:55:55.949: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:55.949: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:55.949: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:55.949: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.949: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:55:55.949: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:55.949: INFO: 	Container nfd-controller ready: true, restart count 0
W1113 03:55:55.963206      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:56.031: INFO: 
Latency metrics for node master2
Nov 13 03:55:56.031: INFO: 
Logging node info for node master3
Nov 13 03:55:56.034: INFO: Node Info: &Node{ObjectMeta:{master3    fce0cd54-e4d8-4ce1-b720-522aad2d7989 150807 0 2021-11-12 21:06:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:51 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:56.035: INFO: 
Logging kubelet events for node master3
Nov 13 03:55:56.036: INFO: 
Logging pods the kubelet thinks are on node master3
Nov 13 03:55:56.045: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:56.045: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:56.045: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:55:56.045: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov 13 03:55:56.045: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:55:56.045: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container autoscaler ready: true, restart count 1
Nov 13 03:55:56.045: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:55:56.045: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:55:56.045: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:55:56.045: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:56.045: INFO: 	Container kube-multus ready: true, restart count 1
W1113 03:55:56.058890      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:56.121: INFO: 
Latency metrics for node master3
Nov 13 03:55:56.121: INFO: 
Logging node info for node node1
Nov 13 03:55:56.124: INFO: Node Info: &Node{ObjectMeta:{node1    6ceb907c-9809-4d18-88c6-b1e10ba80f97 150819 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-13 01:56:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:55 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:55 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:55 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:55 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
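The Node Info block above is the framework's verbatim dump of the v1.Node object for node1: labels, annotations, managed fields, the PodCIDR, capacity and allocatable (including the extended resources cmk.intel.com/exclusive-cores, intel.com/intel_sriov_netdevice and scheduling.k8s.io/foo), node conditions, addresses and the kubelet's cached images. For pulling the same data outside the suite, a minimal client-go sketch follows, under these assumptions: the node name "node1" and the kubeconfig path /root/.kube/config are taken from this log, while the file name and program structure are purely illustrative.

// nodeinfo.go: standalone sketch that prints the parts of the Node status the
// e2e framework dumps above (conditions and allocatable resources) via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logged; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-22s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	for name, qty := range node.Status.Allocatable {
		fmt.Printf("allocatable %s = %s\n", name, qty.String())
	}
}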
Nov 13 03:55:56.125: INFO: 
Logging kubelet events for node node1
Nov 13 03:55:56.127: INFO: 
Logging pods the kubelet thinks are on node node1
Nov 13 03:55:57.321: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:55:57.321: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:55:57.321: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:55:57.321: INFO: 	Container init ready: false, restart count 0
Nov 13 03:55:57.321: INFO: 	Container install ready: false, restart count 0
Nov 13 03:55:57.321: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:55:57.321: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 13 03:55:57.321: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container config-reloader ready: true, restart count 0
Nov 13 03:55:57.321: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 13 03:55:57.321: INFO: 	Container grafana ready: true, restart count 0
Nov 13 03:55:57.321: INFO: 	Container prometheus ready: true, restart count 1
Nov 13 03:55:57.321: INFO: execpodsh7t5 started at 2021-11-13 03:53:43 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:55:57.321: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:55:57.321: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:55:57.321: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:57.321: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:57.321: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:55:57.321: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:57.321: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:55:57.322: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 13 03:55:57.322: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.322: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:57.322: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:57.322: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:57.322: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov 13 03:55:57.322: INFO: nodeport-update-service-cr2sn started at 2021-11-13 03:53:34 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.322: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:55:57.322: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:57.322: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:55:57.322: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:55:57.322: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:55:57.322: INFO: nodeport-update-service-2phmx started at 2021-11-13 03:53:34 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:57.322: INFO: 	Container nodeport-update-service ready: true, restart count 0
W1113 03:55:57.335700      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:57.574: INFO: 
Latency metrics for node node1
Nov 13 03:55:57.574: INFO: 
Logging node info for node node2
Nov 13 03:55:57.577: INFO: Node Info: &Node{ObjectMeta:{node2    652722dd-12b1-4529-ba4d-a00c590e4a68 150803 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 02:52:24 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:50 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:50 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:55:50 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:55:50 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:55:57.577: INFO: 
Logging kubelet events for node node2
Nov 13 03:55:57.580: INFO: 
Logging pods the kubelet thinks are on node node2
Nov 13 03:55:58.020: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:55:58.020: INFO: 	Container init ready: false, restart count 0
Nov 13 03:55:58.020: INFO: 	Container install ready: false, restart count 0
Nov 13 03:55:58.020: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 03:55:58.020: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:55:58.020: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:55:58.020: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container tas-extender ready: true, restart count 0
Nov 13 03:55:58.020: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:55:58.020: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:55:58.020: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov 13 03:55:58.020: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:55:58.020: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:55:58.020: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:55:58.020: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:55:58.020: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:55:58.020: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:55:58.020: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:55:58.020: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:55:58.020: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:55:58.020: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:55:58.020: INFO: 	Container reconcile ready: true, restart count 0
W1113 03:55:58.033943      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:55:58.276: INFO: 
Latency metrics for node node2
Nov 13 03:55:58.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9199" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [144.102 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Nov 13 03:55:55.787: Unexpected error:
      <*errors.errorString | 0xc0046895c0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30630 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30630 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
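The assertion above reduces to a reachability poll: after updating the service to NodePort with both protocols on the same port number, the test repeatedly tried to reach node1's InternalIP (10.10.190.207, per the node dump above) on NodePort 30630 over TCP and gave up after 2 minutes. A minimal standalone sketch of such a poll, assuming a 2-second dial timeout and retry interval (the endpoint and overall timeout come from the failure message; the rest is illustrative, not the framework's implementation):

// reachable.go: keep dialing a TCP endpoint until it answers or a deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(endpoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the NodePort accepted a connection
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, endpoint)
}

func main() {
	if err := waitForTCP("10.10.190.207:30630", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("endpoint reachable")
}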
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":1,"skipped":405,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Nov 13 03:55:58.294: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":2,"skipped":348,"failed":0}
Nov 13 03:54:52.567: INFO: Running AfterSuite actions on all nodes
Nov 13 03:55:58.320: INFO: Running AfterSuite actions on node 1
Nov 13 03:55:58.320: INFO: Skipping dumping logs from cluster



Summarizing 2 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
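Both failures sit in the NodePort datapath on this cluster: the Services spec could not reach the TCP NodePort within its 2-minute window, and the Conntrack spec lost UDP traffic after the backing server pod was recreated, the stale-UDP-conntrack-entry symptom that test exists to catch. The real tests drive agnhost pods from inside the cluster; the sketch below is only an external UDP probe that may help manual triage, with the endpoint passed on the command line because the Conntrack failure does not print its NodePort.

// udpprobe.go: illustrative UDP probe against a NodePort service.
// Usage: go run udpprobe.go <nodeIP:nodePort>
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func probeUDP(endpoint string) error {
	conn, err := net.Dial("udp", endpoint)
	if err != nil {
		return err
	}
	defer conn.Close()

	// Send one datagram; the agnhost-based servers used by these tests answer
	// simple requests like this.
	if _, err := conn.Write([]byte("hostname")); err != nil {
		return err
	}

	// UDP reports no connection error, so a missing reply within the deadline
	// is the signal that datagrams are being dropped on the way to the backend.
	if err := conn.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil {
		return err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return fmt.Errorf("no UDP reply from %s: %v", endpoint, err)
	}
	fmt.Printf("reply from %s: %q\n", endpoint, buf[:n])
	return nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: udpprobe <nodeIP:nodePort>")
		os.Exit(1)
	}
	if err := probeUDP(os.Args[1]); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}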

Ran 27 of 5770 Specs in 218.848 seconds
FAIL! -- 25 Passed | 2 Failed | 0 Pending | 5743 Skipped


Ginkgo ran 1 suite in 3m40.499850407s
Test Suite Failed