Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635565882 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 30 03:51:24.212: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.218: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:24.246: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 03:51:24.309: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 03:51:24.309: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 03:51:24.309: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 03:51:24.309: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 03:51:24.309: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 03:51:24.327: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 03:51:24.327: INFO: e2e test version: v1.21.5
Oct 30 03:51:24.328: INFO: kube-apiserver version: v1.21.1
Oct 30 03:51:24.329: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.335: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.334: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.354: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.336: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.357: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.339: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.359: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.347: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.369: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.350: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.372: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.354: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.376: INFO: Cluster IP family: ipv4
Oct 30 03:51:24.356: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.379: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 03:51:24.372: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.393: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
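The header above is standard e2e.test/Ginkgo harness output: 5770 specs are loaded, shuffled with the printed random seed, and split across 10 parallel worker processes, each of which re-reads the kubeconfig and detects the cluster IP family. The exact command line used for this run is not captured in the log; a typical local invocation of the same suite looks roughly like the sketch below (binary path, focus regex and node count are illustrative assumptions, not taken from this log):

# run the sig-network portion of the e2e suite against an existing cluster,
# with 10 Ginkgo workers in parallel (hedged sketch, not the command from this run)
ginkgo -nodes=10 -focus='\[sig-network\]' ./e2e.test -- \
  --provider=local \
  --kubeconfig=/root/.kube/config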
------------------------------
Oct 30 03:51:24.383: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:24.403: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W1030 03:51:24.396465 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.396: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.399: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:51:24.402: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:24.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4322" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
W1030 03:51:24.409028 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.409: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.411: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 30 03:51:24.434: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 30 03:51:24.436: INFO: starting watch
STEP: patching
STEP: updating
Oct 30 03:51:24.443: INFO: waiting for watch events with expected annotations
Oct 30 03:51:24.443: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Oct 30 03:51:24.444: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:24.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-4033" for this suite.
•SSSSSS
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
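The spec that just passed exercises plain API machinery against networking.k8s.io/v1 NetworkPolicy objects: create, get, list, watch, patch, update, delete, and delete-collection. Outside the test framework the same sequence can be reproduced with kubectl; the namespace and policy name below are illustrative, not the generated names used by the suite:

kubectl create namespace netpol-demo
cat <<'EOF' | kubectl apply -n netpol-demo -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF
kubectl get networkpolicy deny-all-ingress -n netpol-demo            # getting
kubectl get networkpolicies -n netpol-demo                           # listing
kubectl get networkpolicies -A --watch --request-timeout=5s          # cluster-wide watching
kubectl annotate networkpolicy deny-all-ingress -n netpol-demo patched=true   # mirrors the "patched" annotation the spec waits for
kubectl delete networkpolicies -n netpol-demo --all                   # deleting a collection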
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Oct 30 03:51:24.669: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Oct 30 03:51:24.773: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:24.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-123" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.139 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-7150
STEP: changing service nodeport-range-test to out-of-range NodePort 39276
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 39276
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:25.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7150" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•SS
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
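That Services spec relies on API-server validation of spec.ports[*].nodePort: 39276 lies outside the default NodePort range (30000-32767), so both the update and the re-creation are expected to be rejected. A hand-run equivalent looks something like this (service name is illustrative):

kubectl create service nodeport nodeport-range-demo --tcp=80:80
# patching in a port outside the default 30000-32767 range should fail validation
kubectl patch service nodeport-range-demo --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":39276}]'
# creating a fresh Service with the same out-of-range nodePort is rejected the same way
kubectl delete service nodeport-range-demo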
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W1030 03:51:24.936157 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.936: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.938: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6093.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6093.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6093.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6093.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6093.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6093.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 30 03:51:37.015: INFO: DNS probes using dns-6093/dns-test-9bec9ac9-14ad-4dda-b0fe-0b035f9d93bd succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:37.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6093" for this suite.

• [SLOW TEST:12.113 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
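The probe commands above lean entirely on the pod's /etc/resolv.conf search path, which is what lets the partial names kubernetes.default and kubernetes.default.svc resolve. A quick manual version of the same check, using the same agnhost image (which, per this log, carries dig); the pod name is illustrative:

kubectl run dns-probe --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- pause
kubectl exec dns-probe -- cat /etc/resolv.conf                       # search <ns>.svc.cluster.local svc.cluster.local cluster.local ...
kubectl exec dns-probe -- dig +search +short kubernetes.default A       # partial name, expanded via the search path
kubectl exec dns-probe -- dig +search +short kubernetes.default.svc A
kubectl delete pod dns-probe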
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:37.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Oct 30 03:51:37.974: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:37.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4419" for this suite.

S [SKIPPING] [0.032 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:25.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Oct 30 03:51:25.372: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-f35f86cc-c2ad-47bc-9b66-319bb07a4535 dns-2105 3b6edf2f-01e9-4f2b-92ee-98e985756171 142764 0 2021-10-30 03:51:25 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-30 03:51:25 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-cdkn8,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-xp9z8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xp9z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 03:51:37.382: INFO: testServerIP is 10.244.3.101 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Oct 30 03:51:37.391: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils dns-2105 6a7ebc74-ac8e-4992-b263-f3d0ae58e222 142980 0 2021-10-30 03:51:37 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-30 03:51:37 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-th9t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th9t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,To
lerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.101],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Oct 30 03:51:41.398: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-2105 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:51:41.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Oct 30 03:51:41.970: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-2105 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:51:41.970: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:51:42.328: INFO: Deleting pod e2e-dns-utils...
Oct 30 03:51:42.334: INFO: Deleting pod e2e-configmap-dns-server-f35f86cc-c2ad-47bc-9b66-319bb07a4535...
Oct 30 03:51:42.339: INFO: Deleting configmap e2e-coredns-configmap-cdkn8...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:42.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2105" for this suite.

• [SLOW TEST:17.013 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":2,"skipped":398,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
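Stripped of framework noise, the two Pod dumps above amount to: one CoreDNS pod loaded from a generated ConfigMap (the injected test DNS server, which ends up at 10.244.3.101), and one client pod that opts out of cluster DNS with dnsPolicy: None and points its dnsConfig at that server. A minimal manifest expressing the client side, with values taken from this run but the pod name shortened for illustration:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-utils-demo
spec:
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
  dnsPolicy: None                    # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers: ["10.244.3.101"]    # the injected test DNS server from this run
    searches: ["resolv.conf.local"]
    options:
    - name: ndots
      value: "2"
EOF
kubectl exec dns-utils-demo -- cat /etc/resolv.conf   # should contain only the values above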
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:25.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1030 03:51:25.163394 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:25.163: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:25.165: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-362
STEP: creating replication controller externalip-test in namespace services-362
I1030 03:51:25.176635 37 runners.go:190] Created replication controller with name: externalip-test, namespace: services-362, replica count: 2
I1030 03:51:28.229350 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 03:51:31.229656 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 03:51:34.230803 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 03:51:37.231638 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 03:51:40.232775 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 30 03:51:40.232: INFO: Creating new exec pod
Oct 30 03:51:45.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-362 exec execpodlq6sq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct 30 03:51:45.497: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Oct 30 03:51:45.497: INFO: stdout: "externalip-test-qd78h"
Oct 30 03:51:45.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-362 exec execpodlq6sq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.16.94 80'
Oct 30 03:51:45.731: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.16.94 80\nConnection to 10.233.16.94 80 port [tcp/http] succeeded!\n"
Oct 30 03:51:45.731: INFO: stdout: "externalip-test-qd78h"
Oct 30 03:51:45.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-362 exec execpodlq6sq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Oct 30 03:51:45.969: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Oct 30 03:51:45.969: INFO: stdout: "externalip-test-qd78h"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:45.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-362" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:20.835 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":1,"skipped":315,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
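The externalip-test Service above is a plain ClusterIP Service that additionally advertises an external IP (203.0.113.250 in this run) which is not assigned to any node; kube-proxy still programs it, so the nc checks from the exec pod succeed against the service name, the ClusterIP and the external IP alike. A minimal equivalent Service (name and selector are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: externalip-demo
spec:
  selector:
    app: externalip-demo
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 203.0.113.250        # reachable from inside the cluster even though no node owns it
EOF
# from any pod in the cluster (pod name is a placeholder):
kubectl exec <exec-pod> -- /bin/sh -c 'echo hostName | nc -v -t -w 2 203.0.113.250 80'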
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1030 03:51:24.875266 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.875: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.877: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-ff578e0e-b6c6-459d-aa5f-5d91526c2a3f]
STEP: Verifying pods for RC slow-terminating-unready-pod
Oct 30 03:51:24.891: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
Oct 30 03:51:29.894: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Oct 30 03:51:35.908: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-whfqz]: "NOW: 2021-10-30 03:51:35.907573455 +0000 UTC m=+1.065326380", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-6968.svc.cluster.local
Oct 30 03:51:35.908: INFO: Creating new exec pod
Oct 30 03:51:39.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6968 exec execpod-nkrxw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6968.svc.cluster.local:80/'
Oct 30 03:51:40.267: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6968.svc.cluster.local:80/\n"
Oct 30 03:51:40.267: INFO: stdout: "NOW: 2021-10-30 03:51:40.255082179 +0000 UTC m=+5.412835105"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-6968 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Oct 30 03:51:45.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6968 exec execpod-nkrxw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6968.svc.cluster.local:80/; test "$?" -ne "0"'
Oct 30 03:51:46.549: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6968.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Oct 30 03:51:46.549: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Oct 30 03:51:46.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6968 exec execpod-nkrxw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6968.svc.cluster.local:80/'
Oct 30 03:51:46.860: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6968.svc.cluster.local:80/\n"
Oct 30 03:51:46.860: INFO: stdout: "NOW: 2021-10-30 03:51:46.854010879 +0000 UTC m=+12.011763811"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-6968
STEP: deleting service tolerate-unready in namespace services-6968
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:46.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6968" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:22.044 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
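The tolerate-unready Service keeps serving the slow-terminating pod while it is unready or terminating, and stops once that toggle is flipped off. In current API terms the behaviour the spec toggles corresponds to spec.publishNotReadyAddresses; the sketch below is a hedged illustration of a Service carrying that setting (names are illustrative, and the spec itself flips the setting through the test framework rather than a static manifest):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: tolerate-unready-demo
spec:
  publishNotReadyAddresses: true   # keep endpoints for pods that are not (yet, or no longer) Ready
  selector:
    name: slow-terminating-unready-pod
  ports:
  - port: 80
    targetPort: 80
EOF
kubectl get endpoints tolerate-unready-demo -o yaml   # addresses stay listed while the pod is unready or terminating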
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:51:24.489613 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.489: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.491: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-5138
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:24.608: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:24.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:26.642: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:28.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:30.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:32.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:34.644: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:36.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:38.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:40.645: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:42.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:44.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:46.646: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:51:46.650: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:51:52.670: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:51:52.670: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:52.678: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:52.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5138" for this suite.

S [SKIPPING] [28.216 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:51:24.530826 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.531: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.532: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-2033
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:24.652: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:24.683: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:26.687: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:28.688: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:30.689: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:32.687: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:34.687: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:36.688: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:38.686: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:40.688: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:42.687: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:44.687: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:46.688: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:51:46.693: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:51:52.728: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:51:52.728: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:52.734: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:52.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2033" for this suite.

S [SKIPPING] [28.235 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:51:24.654307 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.654: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.656: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-9341
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:24.768: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:24.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:26.804: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:28.803: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:30.803: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:32.803: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:34.803: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:36.803: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:38.803: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:40.804: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:42.803: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:44.804: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:46.803: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:51:46.808: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:51:52.840: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:51:52.840: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:52.846: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:52.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9341" for this suite.

S [SKIPPING] [28.222 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:52.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:51:52.861: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:52.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-2893" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
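Both ESIPP ("external source IP preservation") specs in this section are skipped because they need a cloud load balancer, but the mechanism they exercise is available on any cluster: externalTrafficPolicy: Local keeps the client source IP and only routes to endpoints on the receiving node. A minimal NodePort variant, matching the skipped type=NodePort case (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: esipp-demo
spec:
  type: NodePort
  externalTrafficPolicy: Local   # preserve client source IP; traffic only reaches node-local endpoints
  selector:
    app: esipp-demo
  ports:
  - port: 80
    targetPort: 80
EOF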
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:25.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-7153
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:25.196: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:25.227: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:27.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:29.230: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:31.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:33.232: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:35.233: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:37.235: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:39.231: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:41.232: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:43.231: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:45.230: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:47.231: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:51:47.236: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:51:53.257: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:51:53.257: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:53.266: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:51:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7153" for this suite.


S [SKIPPING] [28.214 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:53.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Oct 30 03:51:53.335: INFO: (0) /api/v1/nodes/node1/proxy/logs/:
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Oct 30 03:51:53.877: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/: 
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-9000
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:38.179: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:38.210: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:40.214: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:42.216: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:44.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:46.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:48.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:50.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:52.216: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:54.215: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:56.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:58.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:00.214: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:00.219: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:06.241: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:06.241: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:06.248: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:06.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9000" for this suite.


S [SKIPPING] [28.209 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
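Note: the per-2-second status lines above come from a readiness wait on the netserver pods. A minimal sketch of such a polling loop with client-go (the namespace and pod name are illustrative, and this is not the framework's actual helper):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls a pod every 2s until it is Running and its Ready condition is True.
    func waitForPodReady(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            if pod.Status.Phase != corev1.PodRunning {
                fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", name, pod.Status.Phase)
                return false, nil
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("The status of Pod %s is Running (Ready = %v)\n", name, c.Status == corev1.ConditionTrue)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        // "nettest-example" and "netserver-0" are illustrative values.
        if err := waitForPodReady(cs, "nettest-example", "netserver-0"); err != nil {
            panic(err)
        }
    }
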
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:46.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-1900
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:46.146: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:46.179: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:48.183: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:50.183: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:52.184: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:54.184: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:56.185: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:58.183: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:00.183: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:02.185: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:04.182: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:06.185: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:08.182: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:08.187: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:18.224: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:18.224: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:18.230: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:18.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1900" for this suite.


S [SKIPPING] [32.228 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:46.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-4823
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:47.108: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:47.142: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:49.148: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:51.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:53.147: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:55.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:57.147: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:59.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:01.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:03.147: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:05.145: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:07.147: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:09.146: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:09.151: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:19.172: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:19.172: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:19.179: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:19.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4823" for this suite.


S [SKIPPING] [32.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:54.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Oct 30 03:51:54.240: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:56.244: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:58.243: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:00.244: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:02.244: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:04.244: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 30 03:52:04.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1290 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 30 03:52:05.290: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Oct 30 03:52:05.290: INFO: stdout: "iptables"
Oct 30 03:52:05.290: INFO: proxyMode: iptables
Oct 30 03:52:05.297: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 30 03:52:05.299: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-1290
Oct 30 03:52:05.304: INFO: sourceip-test cluster ip: 10.233.21.8
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Oct 30 03:52:05.324: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:07.331: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:09.328: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:11.327: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-1290 to expose endpoints map[echo-sourceip:[8080]]
Oct 30 03:52:11.336: INFO: successfully validated that service sourceip-test in namespace services-1290 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Oct 30 03:52:11.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 30 03:52:13.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5567d84c4f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:52:15.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5567d84c4f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:52:17.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162736, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5567d84c4f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:52:19.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162736, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162731, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5567d84c4f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:52:21.351: INFO: Waiting up to 2m0s to get response from 10.233.21.8:8080
Oct 30 03:52:21.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1290 exec pause-pod-5567d84c4f-fpkc5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.21.8:8080/clientip'
Oct 30 03:52:21.939: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.21.8:8080/clientip\n"
Oct 30 03:52:21.939: INFO: stdout: "10.244.3.119:51184"
STEP: Verifying the preserved source ip
Oct 30 03:52:21.939: INFO: Waiting up to 2m0s to get response from 10.233.21.8:8080
Oct 30 03:52:21.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1290 exec pause-pod-5567d84c4f-wxk6j -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.21.8:8080/clientip'
Oct 30 03:52:22.418: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.21.8:8080/clientip\n"
Oct 30 03:52:22.418: INFO: stdout: "10.244.4.2:36740"
STEP: Verifying the preserved source ip
Oct 30 03:52:22.418: INFO: Deleting deployment
Oct 30 03:52:22.423: INFO: Cleaning up the echo server pod
Oct 30 03:52:22.430: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:22.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1290" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:28.242 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":624,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:22.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 30 03:52:22.534: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 30 03:52:22.537: INFO: starting watch
STEP: patching
STEP: updating
Oct 30 03:52:22.544: INFO: waiting for watch events with expected annotations
Oct 30 03:52:22.544: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Oct 30 03:52:22.544: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:22.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-1817" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":648,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:22.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-4969
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-4969
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:22.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4969" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":4,"skipped":704,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:22.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:23.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-958" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":5,"skipped":758,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:53.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-8205
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:53.250: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:53.281: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:55.285: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:57.287: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:59.285: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:01.285: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:03.284: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:05.286: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:07.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:09.285: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:11.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:13.285: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:15.285: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:17.286: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:17.291: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:27.311: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:27.311: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:27.317: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:27.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8205" for this suite.


S [SKIPPING] [34.181 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:53.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-4051
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:53.241: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:53.276: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:55.281: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:57.281: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:59.279: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:01.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:03.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:05.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:07.280: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:09.280: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:11.281: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:13.280: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:15.281: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:17.281: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:19.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:21.281: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:23.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:25.279: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:25.284: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:29.307: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:29.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:29.314: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:29.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4051" for this suite.


S [SKIPPING] [36.191 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:19.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-931
Oct 30 03:52:19.288: INFO: hairpin-test cluster ip: 10.233.45.196
STEP: creating a client/server pod
Oct 30 03:52:19.301: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:21.304: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:23.304: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:25.304: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:27.305: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:29.305: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:31.305: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-931 to expose endpoints map[hairpin:[8080]]
Oct 30 03:52:31.314: INFO: successfully validated that service hairpin-test in namespace services-931 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct 30 03:52:32.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-931 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 30 03:52:32.571: INFO: stderr: "+ nc -v -t -w 2 hairpin-test 8080\n+ echo hostName\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Oct 30 03:52:32.571: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 30 03:52:32.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-931 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.196 8080'
Oct 30 03:52:32.872: INFO: stderr: "+ nc -v -t -w 2 10.233.45.196 8080\nConnection to 10.233.45.196 8080 port [tcp/http-alt] succeeded!\n+ echo hostName\n"
Oct 30 03:52:32.873: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:32.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-931" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:13.685 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":253,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W1030 03:51:24.509175      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.509: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.510: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Oct 30 03:51:24.531: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:26.535: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:28.536: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:30.536: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:32.537: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Oct 30 03:51:32.560: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:34.563: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:36.564: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:38.564: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Oct 30 03:52:38.650: INFO: boom-server pod logs: 2021/10/30 03:51:31 external ip: 10.244.4.229
2021/10/30 03:51:31 listen on 0.0.0.0:9000
2021/10/30 03:51:31 probing 10.244.4.229
2021/10/30 03:51:38 tcp packet: &{SrcPort:33455 DestPort:9000 Seq:3250205789 Ack:0 Flags:40962 WindowSize:29200 Checksum:25946 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:38 tcp packet: &{SrcPort:33455 DestPort:9000 Seq:3250205790 Ack:2683033272 Flags:32784 WindowSize:229 Checksum:49049 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:38 connection established
2021/10/30 03:51:38 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 130 175 159 234 80 24 193 186 52 94 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:38 checksumer: &{sum:541670 oddByte:33 length:39}
2021/10/30 03:51:38 ret:  541703
2021/10/30 03:51:38 ret:  17423
2021/10/30 03:51:38 ret:  17423
2021/10/30 03:51:38 boom packet injected
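(Aside on the checksumer lines above: the 32-bit accumulator 541670 plus the odd trailing byte 33 gives 541703, which folds down to the 16-bit value 17423. A minimal sketch of that folding step, written for illustration and not taken from the boom-server source:)

    package main

    import "fmt"

    // fold16 reduces a 32-bit one's-complement accumulator to 16 bits by
    // repeatedly adding the carry back in, as in the "ret:" lines above.
    func fold16(sum uint32) uint32 {
        for sum > 0xffff {
            sum = (sum >> 16) + (sum & 0xffff)
        }
        return sum
    }

    func main() {
        sum := uint32(541670) + uint32(33) // accumulator plus odd byte from the log
        fmt.Println(sum)                   // 541703
        fmt.Println(fold16(sum))           // 17423
    }
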
2021/10/30 03:51:38 tcp packet: &{SrcPort:33455 DestPort:9000 Seq:3250205790 Ack:2683033272 Flags:32785 WindowSize:229 Checksum:49048 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:40 tcp packet: &{SrcPort:36113 DestPort:9000 Seq:1982307419 Ack:0 Flags:40962 WindowSize:29200 Checksum:14012 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:40 tcp packet: &{SrcPort:36113 DestPort:9000 Seq:1982307420 Ack:3903819247 Flags:32784 WindowSize:229 Checksum:35119 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:40 connection established
2021/10/30 03:51:40 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 141 17 232 174 7 79 118 39 156 92 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:40 checksumer: &{sum:461838 oddByte:33 length:39}
2021/10/30 03:51:40 ret:  461871
2021/10/30 03:51:40 ret:  3126
2021/10/30 03:51:40 ret:  3126
2021/10/30 03:51:40 boom packet injected
2021/10/30 03:51:40 tcp packet: &{SrcPort:36113 DestPort:9000 Seq:1982307420 Ack:3903819247 Flags:32785 WindowSize:229 Checksum:35118 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:42 tcp packet: &{SrcPort:46613 DestPort:9000 Seq:3464415699 Ack:0 Flags:40962 WindowSize:29200 Checksum:32792 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:42 tcp packet: &{SrcPort:46613 DestPort:9000 Seq:3464415700 Ack:855085109 Flags:32784 WindowSize:229 Checksum:32302 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:42 connection established
2021/10/30 03:51:42 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 182 21 50 246 9 149 206 126 201 212 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:42 checksumer: &{sum:552200 oddByte:33 length:39}
2021/10/30 03:51:42 ret:  552233
2021/10/30 03:51:42 ret:  27953
2021/10/30 03:51:42 ret:  27953
2021/10/30 03:51:42 boom packet injected
2021/10/30 03:51:42 tcp packet: &{SrcPort:46613 DestPort:9000 Seq:3464415700 Ack:855085109 Flags:32785 WindowSize:229 Checksum:32301 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:44 tcp packet: &{SrcPort:40256 DestPort:9000 Seq:859456590 Ack:0 Flags:40962 WindowSize:29200 Checksum:45542 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:44 tcp packet: &{SrcPort:40256 DestPort:9000 Seq:859456591 Ack:249823489 Flags:32784 WindowSize:229 Checksum:23411 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:44 connection established
2021/10/30 03:51:44 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 157 64 14 226 122 97 51 58 68 79 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:44 checksumer: &{sum:493084 oddByte:33 length:39}
2021/10/30 03:51:44 ret:  493117
2021/10/30 03:51:44 ret:  34372
2021/10/30 03:51:44 ret:  34372
2021/10/30 03:51:44 boom packet injected
2021/10/30 03:51:44 tcp packet: &{SrcPort:40256 DestPort:9000 Seq:859456591 Ack:249823489 Flags:32785 WindowSize:229 Checksum:23410 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:46 tcp packet: &{SrcPort:39128 DestPort:9000 Seq:3702892243 Ack:0 Flags:40962 WindowSize:29200 Checksum:41597 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:46 tcp packet: &{SrcPort:39128 DestPort:9000 Seq:3702892244 Ack:268250987 Flags:32784 WindowSize:229 Checksum:5303 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:46 connection established
2021/10/30 03:51:46 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 152 216 15 251 168 203 220 181 166 212 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:46 checksumer: &{sum:631377 oddByte:33 length:39}
2021/10/30 03:51:46 ret:  631410
2021/10/30 03:51:46 ret:  41595
2021/10/30 03:51:46 ret:  41595
2021/10/30 03:51:46 boom packet injected
2021/10/30 03:51:46 tcp packet: &{SrcPort:39128 DestPort:9000 Seq:3702892244 Ack:268250987 Flags:32785 WindowSize:229 Checksum:5302 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:48 tcp packet: &{SrcPort:44917 DestPort:9000 Seq:2699374323 Ack:0 Flags:40962 WindowSize:29200 Checksum:14272 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:48 tcp packet: &{SrcPort:44917 DestPort:9000 Seq:2699374324 Ack:2391715894 Flags:32784 WindowSize:229 Checksum:41675 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:48 connection established
2021/10/30 03:51:48 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 175 117 142 141 41 150 160 229 46 244 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:48 checksumer: &{sum:584628 oddByte:33 length:39}
2021/10/30 03:51:48 ret:  584661
2021/10/30 03:51:48 ret:  60381
2021/10/30 03:51:48 ret:  60381
2021/10/30 03:51:48 boom packet injected
2021/10/30 03:51:48 tcp packet: &{SrcPort:44917 DestPort:9000 Seq:2699374324 Ack:2391715894 Flags:32785 WindowSize:229 Checksum:41674 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:48 tcp packet: &{SrcPort:33455 DestPort:9000 Seq:3250205791 Ack:2683033273 Flags:32784 WindowSize:229 Checksum:28949 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:50 tcp packet: &{SrcPort:33360 DestPort:9000 Seq:1992617355 Ack:0 Flags:40962 WindowSize:29200 Checksum:51357 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:50 tcp packet: &{SrcPort:33360 DestPort:9000 Seq:1992617356 Ack:1465508083 Flags:32784 WindowSize:229 Checksum:12881 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:50 connection established
2021/10/30 03:51:50 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 130 80 87 88 90 83 118 196 237 140 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:50 checksumer: &{sum:509462 oddByte:33 length:39}
2021/10/30 03:51:50 ret:  509495
2021/10/30 03:51:50 ret:  50750
2021/10/30 03:51:50 ret:  50750
2021/10/30 03:51:50 boom packet injected
2021/10/30 03:51:50 tcp packet: &{SrcPort:33360 DestPort:9000 Seq:1992617356 Ack:1465508083 Flags:32785 WindowSize:229 Checksum:12880 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:50 tcp packet: &{SrcPort:36113 DestPort:9000 Seq:1982307421 Ack:3903819248 Flags:32784 WindowSize:229 Checksum:14955 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:52 tcp packet: &{SrcPort:46613 DestPort:9000 Seq:3464415701 Ack:855085110 Flags:32784 WindowSize:229 Checksum:12300 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:52 tcp packet: &{SrcPort:44991 DestPort:9000 Seq:223331524 Ack:0 Flags:40962 WindowSize:29200 Checksum:9626 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:52 tcp packet: &{SrcPort:44991 DestPort:9000 Seq:223331525 Ack:902841787 Flags:32784 WindowSize:229 Checksum:17470 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:52 connection established
2021/10/30 03:51:52 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 175 191 53 206 191 27 13 79 196 197 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:52 checksumer: &{sum:538356 oddByte:33 length:39}
2021/10/30 03:51:52 ret:  538389
2021/10/30 03:51:52 ret:  14109
2021/10/30 03:51:52 ret:  14109
2021/10/30 03:51:52 boom packet injected
2021/10/30 03:51:52 tcp packet: &{SrcPort:44991 DestPort:9000 Seq:223331525 Ack:902841787 Flags:32785 WindowSize:229 Checksum:17469 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:54 tcp packet: &{SrcPort:40256 DestPort:9000 Seq:859456592 Ack:249823490 Flags:32784 WindowSize:229 Checksum:3409 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:54 tcp packet: &{SrcPort:40911 DestPort:9000 Seq:417181101 Ack:0 Flags:40962 WindowSize:29200 Checksum:14658 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:54 tcp packet: &{SrcPort:40911 DestPort:9000 Seq:417181102 Ack:78247565 Flags:32784 WindowSize:229 Checksum:53353 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:54 connection established
2021/10/30 03:51:54 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 159 207 4 168 111 237 24 221 173 174 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:54 checksumer: &{sum:616791 oddByte:33 length:39}
2021/10/30 03:51:54 ret:  616824
2021/10/30 03:51:54 ret:  27009
2021/10/30 03:51:54 ret:  27009
2021/10/30 03:51:54 boom packet injected
2021/10/30 03:51:54 tcp packet: &{SrcPort:40911 DestPort:9000 Seq:417181102 Ack:78247565 Flags:32785 WindowSize:229 Checksum:53352 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:56 tcp packet: &{SrcPort:39128 DestPort:9000 Seq:3702892245 Ack:268250988 Flags:32784 WindowSize:229 Checksum:50834 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:56 tcp packet: &{SrcPort:44200 DestPort:9000 Seq:3145761678 Ack:0 Flags:40962 WindowSize:29200 Checksum:44052 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:56 tcp packet: &{SrcPort:44200 DestPort:9000 Seq:3145761679 Ack:442471101 Flags:32784 WindowSize:229 Checksum:35206 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:56 connection established
2021/10/30 03:51:56 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 172 168 26 94 12 29 187 128 131 143 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:56 checksumer: &{sum:502928 oddByte:33 length:39}
2021/10/30 03:51:56 ret:  502961
2021/10/30 03:51:56 ret:  44216
2021/10/30 03:51:56 ret:  44216
2021/10/30 03:51:56 boom packet injected
2021/10/30 03:51:56 tcp packet: &{SrcPort:44200 DestPort:9000 Seq:3145761679 Ack:442471101 Flags:32785 WindowSize:229 Checksum:35205 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:58 tcp packet: &{SrcPort:46245 DestPort:9000 Seq:1274724588 Ack:0 Flags:40962 WindowSize:29200 Checksum:52844 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:51:58 tcp packet: &{SrcPort:46245 DestPort:9000 Seq:1274724589 Ack:12807364 Flags:32784 WindowSize:229 Checksum:58272 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:58 connection established
2021/10/30 03:51:58 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 180 165 0 193 230 36 75 250 192 237 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:51:58 checksumer: &{sum:584741 oddByte:33 length:39}
2021/10/30 03:51:58 ret:  584774
2021/10/30 03:51:58 ret:  60494
2021/10/30 03:51:58 ret:  60494
2021/10/30 03:51:58 boom packet injected
2021/10/30 03:51:58 tcp packet: &{SrcPort:46245 DestPort:9000 Seq:1274724589 Ack:12807364 Flags:32785 WindowSize:229 Checksum:58271 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:51:58 tcp packet: &{SrcPort:44917 DestPort:9000 Seq:2699374325 Ack:2391715895 Flags:32784 WindowSize:229 Checksum:21581 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:00 tcp packet: &{SrcPort:36483 DestPort:9000 Seq:2307490967 Ack:0 Flags:40962 WindowSize:29200 Checksum:60292 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:00 tcp packet: &{SrcPort:36483 DestPort:9000 Seq:2307490968 Ack:895039033 Flags:32784 WindowSize:229 Checksum:64221 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:00 connection established
2021/10/30 03:52:00 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 142 131 53 87 175 153 137 137 132 152 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:00 checksumer: &{sum:528127 oddByte:33 length:39}
2021/10/30 03:52:00 ret:  528160
2021/10/30 03:52:00 ret:  3880
2021/10/30 03:52:00 ret:  3880
2021/10/30 03:52:00 boom packet injected
2021/10/30 03:52:00 tcp packet: &{SrcPort:36483 DestPort:9000 Seq:2307490968 Ack:895039033 Flags:32785 WindowSize:229 Checksum:64220 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:00 tcp packet: &{SrcPort:33360 DestPort:9000 Seq:1992617357 Ack:1465508084 Flags:32784 WindowSize:229 Checksum:58375 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:02 tcp packet: &{SrcPort:44991 DestPort:9000 Seq:223331526 Ack:902841788 Flags:32784 WindowSize:229 Checksum:63002 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:02 tcp packet: &{SrcPort:32847 DestPort:9000 Seq:368102319 Ack:0 Flags:40962 WindowSize:29200 Checksum:7786 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:02 tcp packet: &{SrcPort:32847 DestPort:9000 Seq:368102320 Ack:1175539870 Flags:32784 WindowSize:229 Checksum:64212 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:02 connection established
2021/10/30 03:52:02 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 128 79 70 15 201 254 21 240 203 176 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:02 checksumer: &{sum:554735 oddByte:33 length:39}
2021/10/30 03:52:02 ret:  554768
2021/10/30 03:52:02 ret:  30488
2021/10/30 03:52:02 ret:  30488
2021/10/30 03:52:02 boom packet injected
2021/10/30 03:52:02 tcp packet: &{SrcPort:32847 DestPort:9000 Seq:368102320 Ack:1175539870 Flags:32785 WindowSize:229 Checksum:64211 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:04 tcp packet: &{SrcPort:40911 DestPort:9000 Seq:417181103 Ack:78247566 Flags:32784 WindowSize:229 Checksum:33350 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:04 tcp packet: &{SrcPort:36768 DestPort:9000 Seq:3033615423 Ack:0 Flags:40962 WindowSize:29200 Checksum:59350 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:04 tcp packet: &{SrcPort:36768 DestPort:9000 Seq:3033615424 Ack:353315484 Flags:32784 WindowSize:229 Checksum:4982 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:04 connection established
2021/10/30 03:52:04 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 143 160 21 13 163 252 180 209 76 64 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:04 checksumer: &{sum:537799 oddByte:33 length:39}
2021/10/30 03:52:04 ret:  537832
2021/10/30 03:52:04 ret:  13552
2021/10/30 03:52:04 ret:  13552
2021/10/30 03:52:04 boom packet injected
2021/10/30 03:52:04 tcp packet: &{SrcPort:36768 DestPort:9000 Seq:3033615424 Ack:353315484 Flags:32785 WindowSize:229 Checksum:4981 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:06 tcp packet: &{SrcPort:44200 DestPort:9000 Seq:3145761680 Ack:442471102 Flags:32784 WindowSize:229 Checksum:15202 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:06 tcp packet: &{SrcPort:43465 DestPort:9000 Seq:4057172165 Ack:0 Flags:40962 WindowSize:29200 Checksum:19540 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:06 tcp packet: &{SrcPort:43465 DestPort:9000 Seq:4057172166 Ack:2734198143 Flags:32784 WindowSize:229 Checksum:32596 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:06 connection established
2021/10/30 03:52:06 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 169 201 162 247 6 223 241 211 136 198 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:06 checksumer: &{sum:635722 oddByte:33 length:39}
2021/10/30 03:52:06 ret:  635755
2021/10/30 03:52:06 ret:  45940
2021/10/30 03:52:06 ret:  45940
2021/10/30 03:52:06 boom packet injected
2021/10/30 03:52:06 tcp packet: &{SrcPort:43465 DestPort:9000 Seq:4057172166 Ack:2734198143 Flags:32785 WindowSize:229 Checksum:32595 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:08 tcp packet: &{SrcPort:46245 DestPort:9000 Seq:1274724590 Ack:12807365 Flags:32784 WindowSize:229 Checksum:38270 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:08 tcp packet: &{SrcPort:37599 DestPort:9000 Seq:4269933692 Ack:0 Flags:40962 WindowSize:29200 Checksum:54024 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:08 tcp packet: &{SrcPort:37599 DestPort:9000 Seq:4269933693 Ack:1976524001 Flags:32784 WindowSize:229 Checksum:23553 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:08 connection established
2021/10/30 03:52:08 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 146 223 117 205 214 65 254 130 4 125 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:08 checksumer: &{sum:550751 oddByte:33 length:39}
2021/10/30 03:52:08 ret:  550784
2021/10/30 03:52:08 ret:  26504
2021/10/30 03:52:08 ret:  26504
2021/10/30 03:52:08 boom packet injected
2021/10/30 03:52:08 tcp packet: &{SrcPort:37599 DestPort:9000 Seq:4269933693 Ack:1976524001 Flags:32785 WindowSize:229 Checksum:23552 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:10 tcp packet: &{SrcPort:36483 DestPort:9000 Seq:2307490969 Ack:895039034 Flags:32784 WindowSize:229 Checksum:44218 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:10 tcp packet: &{SrcPort:45875 DestPort:9000 Seq:418088821 Ack:0 Flags:40962 WindowSize:29200 Checksum:3458 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:10 tcp packet: &{SrcPort:45875 DestPort:9000 Seq:418088822 Ack:2172137395 Flags:32784 WindowSize:229 Checksum:45102 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:10 connection established
2021/10/30 03:52:10 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 179 51 129 118 169 19 24 235 135 118 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:10 checksumer: &{sum:497660 oddByte:33 length:39}
2021/10/30 03:52:10 ret:  497693
2021/10/30 03:52:10 ret:  38948
2021/10/30 03:52:10 ret:  38948
2021/10/30 03:52:10 boom packet injected
2021/10/30 03:52:10 tcp packet: &{SrcPort:45875 DestPort:9000 Seq:418088822 Ack:2172137395 Flags:32785 WindowSize:229 Checksum:45101 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:12 tcp packet: &{SrcPort:32847 DestPort:9000 Seq:368102321 Ack:1175539871 Flags:32784 WindowSize:229 Checksum:44209 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:12 tcp packet: &{SrcPort:39941 DestPort:9000 Seq:2119624845 Ack:0 Flags:40962 WindowSize:29200 Checksum:22108 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:12 tcp packet: &{SrcPort:39941 DestPort:9000 Seq:2119624846 Ack:1699388872 Flags:32784 WindowSize:229 Checksum:40785 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:12 connection established
2021/10/30 03:52:12 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 156 5 101 73 23 40 126 86 232 142 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:12 checksumer: &{sum:447742 oddByte:33 length:39}
2021/10/30 03:52:12 ret:  447775
2021/10/30 03:52:12 ret:  54565
2021/10/30 03:52:12 ret:  54565
2021/10/30 03:52:12 boom packet injected
2021/10/30 03:52:12 tcp packet: &{SrcPort:39941 DestPort:9000 Seq:2119624846 Ack:1699388872 Flags:32785 WindowSize:229 Checksum:40784 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:14 tcp packet: &{SrcPort:36768 DestPort:9000 Seq:3033615425 Ack:353315485 Flags:32784 WindowSize:229 Checksum:50515 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:14 tcp packet: &{SrcPort:34159 DestPort:9000 Seq:767502682 Ack:0 Flags:40962 WindowSize:29200 Checksum:29932 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:14 tcp packet: &{SrcPort:34159 DestPort:9000 Seq:767502683 Ack:3179515575 Flags:32784 WindowSize:229 Checksum:27880 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:14 connection established
2021/10/30 03:52:14 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 133 111 189 130 8 23 45 191 41 91 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:14 checksumer: &{sum:498720 oddByte:33 length:39}
2021/10/30 03:52:14 ret:  498753
2021/10/30 03:52:14 ret:  40008
2021/10/30 03:52:14 ret:  40008
2021/10/30 03:52:14 boom packet injected
2021/10/30 03:52:14 tcp packet: &{SrcPort:34159 DestPort:9000 Seq:767502683 Ack:3179515575 Flags:32785 WindowSize:229 Checksum:27879 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:16 tcp packet: &{SrcPort:43465 DestPort:9000 Seq:4057172167 Ack:2734198144 Flags:32784 WindowSize:229 Checksum:12593 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:16 tcp packet: &{SrcPort:41248 DestPort:9000 Seq:2259963684 Ack:0 Flags:40962 WindowSize:29200 Checksum:53930 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:16 tcp packet: &{SrcPort:41248 DestPort:9000 Seq:2259963685 Ack:2946573383 Flags:32784 WindowSize:229 Checksum:15144 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:16 connection established
2021/10/30 03:52:16 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 161 32 175 159 157 167 134 180 79 37 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:16 checksumer: &{sum:506434 oddByte:33 length:39}
2021/10/30 03:52:16 ret:  506467
2021/10/30 03:52:16 ret:  47722
2021/10/30 03:52:16 ret:  47722
2021/10/30 03:52:16 boom packet injected
2021/10/30 03:52:16 tcp packet: &{SrcPort:41248 DestPort:9000 Seq:2259963685 Ack:2946573383 Flags:32785 WindowSize:229 Checksum:15143 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:18 tcp packet: &{SrcPort:37599 DestPort:9000 Seq:4269933694 Ack:1976524002 Flags:32784 WindowSize:229 Checksum:3551 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:18 tcp packet: &{SrcPort:36956 DestPort:9000 Seq:1267018314 Ack:0 Flags:40962 WindowSize:29200 Checksum:15271 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:18 tcp packet: &{SrcPort:36956 DestPort:9000 Seq:1267018315 Ack:1825616837 Flags:32784 WindowSize:229 Checksum:20388 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:18 connection established
2021/10/30 03:52:18 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 144 92 108 207 45 37 75 133 42 75 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:18 checksumer: &{sum:498206 oddByte:33 length:39}
2021/10/30 03:52:18 ret:  498239
2021/10/30 03:52:18 ret:  39494
2021/10/30 03:52:18 ret:  39494
2021/10/30 03:52:18 boom packet injected
2021/10/30 03:52:18 tcp packet: &{SrcPort:36956 DestPort:9000 Seq:1267018315 Ack:1825616837 Flags:32785 WindowSize:229 Checksum:20387 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:20 tcp packet: &{SrcPort:45875 DestPort:9000 Seq:418088823 Ack:2172137396 Flags:32784 WindowSize:229 Checksum:25100 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:20 tcp packet: &{SrcPort:39419 DestPort:9000 Seq:1047249828 Ack:0 Flags:40962 WindowSize:29200 Checksum:40439 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:20 tcp packet: &{SrcPort:39419 DestPort:9000 Seq:1047249829 Ack:3870399153 Flags:32784 WindowSize:229 Checksum:18777 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:20 connection established
2021/10/30 03:52:20 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 153 251 230 176 20 17 62 107 195 165 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:20 checksumer: &{sum:542484 oddByte:33 length:39}
2021/10/30 03:52:20 ret:  542517
2021/10/30 03:52:20 ret:  18237
2021/10/30 03:52:20 ret:  18237
2021/10/30 03:52:20 boom packet injected
2021/10/30 03:52:20 tcp packet: &{SrcPort:39419 DestPort:9000 Seq:1047249829 Ack:3870399153 Flags:32785 WindowSize:229 Checksum:18776 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:22 tcp packet: &{SrcPort:39941 DestPort:9000 Seq:2119624847 Ack:1699388873 Flags:32784 WindowSize:229 Checksum:20782 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:22 tcp packet: &{SrcPort:32776 DestPort:9000 Seq:3247833014 Ack:0 Flags:40962 WindowSize:29200 Checksum:61660 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:22 tcp packet: &{SrcPort:32776 DestPort:9000 Seq:3247833015 Ack:3933792069 Flags:32784 WindowSize:229 Checksum:17427 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:22 connection established
2021/10/30 03:52:22 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 128 8 234 119 96 165 193 149 255 183 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:22 checksumer: &{sum:519178 oddByte:33 length:39}
2021/10/30 03:52:22 ret:  519211
2021/10/30 03:52:22 ret:  60466
2021/10/30 03:52:22 ret:  60466
2021/10/30 03:52:22 boom packet injected
2021/10/30 03:52:22 tcp packet: &{SrcPort:32776 DestPort:9000 Seq:3247833015 Ack:3933792069 Flags:32785 WindowSize:229 Checksum:17426 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:24 tcp packet: &{SrcPort:34159 DestPort:9000 Seq:767502684 Ack:3179515576 Flags:32784 WindowSize:229 Checksum:7877 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:24 tcp packet: &{SrcPort:34499 DestPort:9000 Seq:4022818197 Ack:0 Flags:40962 WindowSize:29200 Checksum:24129 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:24 tcp packet: &{SrcPort:34499 DestPort:9000 Seq:4022818198 Ack:160223684 Flags:32784 WindowSize:229 Checksum:40980 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:24 connection established
2021/10/30 03:52:24 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 134 195 9 139 75 36 239 199 85 150 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:24 checksumer: &{sum:543134 oddByte:33 length:39}
2021/10/30 03:52:24 ret:  543167
2021/10/30 03:52:24 ret:  18887
2021/10/30 03:52:24 ret:  18887
2021/10/30 03:52:24 boom packet injected
2021/10/30 03:52:24 tcp packet: &{SrcPort:34499 DestPort:9000 Seq:4022818198 Ack:160223684 Flags:32785 WindowSize:229 Checksum:40979 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:26 tcp packet: &{SrcPort:41248 DestPort:9000 Seq:2259963686 Ack:2946573384 Flags:32784 WindowSize:229 Checksum:60675 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:26 tcp packet: &{SrcPort:42764 DestPort:9000 Seq:466818791 Ack:0 Flags:40962 WindowSize:29200 Checksum:18634 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:26 tcp packet: &{SrcPort:42764 DestPort:9000 Seq:466818792 Ack:2893806895 Flags:32784 WindowSize:229 Checksum:46193 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:26 connection established
2021/10/30 03:52:26 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 167 12 172 122 118 143 27 211 22 232 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:26 checksumer: &{sum:543354 oddByte:33 length:39}
2021/10/30 03:52:26 ret:  543387
2021/10/30 03:52:26 ret:  19107
2021/10/30 03:52:26 ret:  19107
2021/10/30 03:52:26 boom packet injected
2021/10/30 03:52:26 tcp packet: &{SrcPort:42764 DestPort:9000 Seq:466818792 Ack:2893806895 Flags:32785 WindowSize:229 Checksum:46192 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:28 tcp packet: &{SrcPort:38192 DestPort:9000 Seq:2852013545 Ack:0 Flags:40962 WindowSize:29200 Checksum:38312 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:28 tcp packet: &{SrcPort:38192 DestPort:9000 Seq:2852013546 Ack:3358257663 Flags:32784 WindowSize:229 Checksum:59648 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:28 connection established
2021/10/30 03:52:28 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 149 48 200 41 107 95 169 254 69 234 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:28 checksumer: &{sum:531254 oddByte:33 length:39}
2021/10/30 03:52:28 ret:  531287
2021/10/30 03:52:28 ret:  7007
2021/10/30 03:52:28 ret:  7007
2021/10/30 03:52:28 boom packet injected
2021/10/30 03:52:28 tcp packet: &{SrcPort:38192 DestPort:9000 Seq:2852013546 Ack:3358257663 Flags:32785 WindowSize:229 Checksum:59647 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:28 tcp packet: &{SrcPort:36956 DestPort:9000 Seq:1267018316 Ack:1825616838 Flags:32784 WindowSize:229 Checksum:300 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:30 tcp packet: &{SrcPort:39419 DestPort:9000 Seq:1047249830 Ack:3870399154 Flags:32784 WindowSize:229 Checksum:64308 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:30 tcp packet: &{SrcPort:39582 DestPort:9000 Seq:3569585649 Ack:0 Flags:40962 WindowSize:29200 Checksum:5532 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:30 tcp packet: &{SrcPort:39582 DestPort:9000 Seq:3569585650 Ack:2770206285 Flags:32784 WindowSize:229 Checksum:30690 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:30 connection established
2021/10/30 03:52:30 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 154 158 165 28 119 173 212 195 141 242 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:30 checksumer: &{sum:563095 oddByte:33 length:39}
2021/10/30 03:52:30 ret:  563128
2021/10/30 03:52:30 ret:  38848
2021/10/30 03:52:30 ret:  38848
2021/10/30 03:52:30 boom packet injected
2021/10/30 03:52:30 tcp packet: &{SrcPort:39582 DestPort:9000 Seq:3569585650 Ack:2770206285 Flags:32785 WindowSize:229 Checksum:30689 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:32 tcp packet: &{SrcPort:32776 DestPort:9000 Seq:3247833016 Ack:3933792070 Flags:32784 WindowSize:229 Checksum:62959 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:32 tcp packet: &{SrcPort:39617 DestPort:9000 Seq:3726634471 Ack:0 Flags:40962 WindowSize:29200 Checksum:42070 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:32 tcp packet: &{SrcPort:39617 DestPort:9000 Seq:3726634472 Ack:678206034 Flags:32784 WindowSize:229 Checksum:57209 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:32 connection established
2021/10/30 03:52:32 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 154 193 40 107 19 178 222 31 237 232 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:32 checksumer: &{sum:548896 oddByte:33 length:39}
2021/10/30 03:52:32 ret:  548929
2021/10/30 03:52:32 ret:  24649
2021/10/30 03:52:32 ret:  24649
2021/10/30 03:52:32 boom packet injected
2021/10/30 03:52:32 tcp packet: &{SrcPort:39617 DestPort:9000 Seq:3726634472 Ack:678206034 Flags:32785 WindowSize:229 Checksum:57208 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:34 tcp packet: &{SrcPort:34499 DestPort:9000 Seq:4022818199 Ack:160223685 Flags:32784 WindowSize:229 Checksum:20977 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:34 tcp packet: &{SrcPort:41413 DestPort:9000 Seq:4004680503 Ack:0 Flags:40962 WindowSize:29200 Checksum:57247 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:34 tcp packet: &{SrcPort:41413 DestPort:9000 Seq:4004680504 Ack:2640439880 Flags:32784 WindowSize:229 Checksum:19974 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:34 connection established
2021/10/30 03:52:34 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 161 197 157 96 99 168 238 178 147 56 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:34 checksumer: &{sum:537250 oddByte:33 length:39}
2021/10/30 03:52:34 ret:  537283
2021/10/30 03:52:34 ret:  13003
2021/10/30 03:52:34 ret:  13003
2021/10/30 03:52:34 boom packet injected
2021/10/30 03:52:34 tcp packet: &{SrcPort:41413 DestPort:9000 Seq:4004680504 Ack:2640439880 Flags:32785 WindowSize:229 Checksum:19973 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:36 tcp packet: &{SrcPort:42764 DestPort:9000 Seq:466818793 Ack:2893806896 Flags:32784 WindowSize:229 Checksum:26191 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:36 tcp packet: &{SrcPort:36114 DestPort:9000 Seq:446627950 Ack:0 Flags:40962 WindowSize:29200 Checksum:21343 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:36 tcp packet: &{SrcPort:36114 DestPort:9000 Seq:446627951 Ack:845604346 Flags:32784 WindowSize:229 Checksum:10559 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:36 connection established
2021/10/30 03:52:36 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 141 18 50 101 95 90 26 159 0 111 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:36 checksumer: &{sum:481464 oddByte:33 length:39}
2021/10/30 03:52:36 ret:  481497
2021/10/30 03:52:36 ret:  22752
2021/10/30 03:52:36 ret:  22752
2021/10/30 03:52:36 boom packet injected
2021/10/30 03:52:36 tcp packet: &{SrcPort:36114 DestPort:9000 Seq:446627951 Ack:845604346 Flags:32785 WindowSize:229 Checksum:10558 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:38 tcp packet: &{SrcPort:39161 DestPort:9000 Seq:2676957390 Ack:0 Flags:40962 WindowSize:29200 Checksum:39511 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.103
2021/10/30 03:52:38 tcp packet: &{SrcPort:39161 DestPort:9000 Seq:2676957391 Ack:564279970 Flags:32784 WindowSize:229 Checksum:9348 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:38 connection established
2021/10/30 03:52:38 calling checksumTCP: 10.244.4.229 10.244.3.103 [35 40 152 249 33 160 180 2 159 143 32 207 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:52:38 checksumer: &{sum:553900 oddByte:33 length:39}
2021/10/30 03:52:38 ret:  553933
2021/10/30 03:52:38 ret:  29653
2021/10/30 03:52:38 ret:  29653
2021/10/30 03:52:38 boom packet injected
2021/10/30 03:52:38 tcp packet: &{SrcPort:39161 DestPort:9000 Seq:2676957391 Ack:564279970 Flags:32785 WindowSize:229 Checksum:9347 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.103
2021/10/30 03:52:38 tcp packet: &{SrcPort:38192 DestPort:9000 Seq:2852013547 Ack:3358257664 Flags:32784 WindowSize:229 Checksum:39562 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.103

Oct 30 03:52:38.650: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:38.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6299" for this suite.


• [SLOW TEST:74.169 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":38,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:29.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Oct 30 03:52:29.509: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-4293" to be "Succeeded or Failed"
Oct 30 03:52:29.511: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.743839ms
Oct 30 03:52:31.515: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006150452s
Oct 30 03:52:33.518: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009458116s
Oct 30 03:52:35.523: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013953523s
Oct 30 03:52:37.527: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018196045s
Oct 30 03:52:39.531: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022403006s
STEP: Saw pod success
Oct 30 03:52:39.531: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:39.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4293" for this suite.


• [SLOW TEST:10.167 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":245,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:18.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-667
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:52:18.572: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:18.602: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:20.605: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:22.606: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:24.606: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:26.608: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:28.613: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:30.607: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:32.604: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:34.605: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:36.607: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:38.607: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:40.607: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:42.605: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:42.610: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:48.631: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:48.631: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:48.637: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:48.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-667" for this suite.


S [SKIPPING] [30.188 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
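The skip above (and the identical "Requires at least 2 nodes (not -1)" skips later in this run) is driven by the framework's configured node count, which is apparently reported as -1 for the local provider, rather than by counting the live nodes. A standalone client-go sketch of an equivalent schedulable-node check; the kubeconfig path matches the one used throughout this run, everything else is illustrative and not the framework's actual implementation:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Count nodes that are not marked unschedulable; a two-node test like
	// the one above could skip itself when this is below 2.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	schedulable := 0
	for _, n := range nodes.Items {
		if !n.Spec.Unschedulable {
			schedulable++
		}
	}
	if schedulable < 2 {
		fmt.Printf("skipping: requires at least 2 nodes (have %d)\n", schedulable)
		return
	}
	fmt.Printf("%d schedulable nodes available\n", schedulable)
}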
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:27.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Oct 30 03:52:47.445: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Oct 30 03:52:47.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testcv7j5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Oct 30 03:52:48.018: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Oct 30 03:52:48.018: INFO: stdout: "10.244.3.126:37748"
STEP: Verifying the preserved source ip
Oct 30 03:52:48.018: INFO: Waiting up to 2m0s to get response from 10.244.2.10:8080
Oct 30 03:52:48.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testcv7j5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip'
Oct 30 03:52:48.615: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip\n"
Oct 30 03:52:48.615: INFO: stdout: "10.244.3.126:49710"
STEP: Verifying the preserved source ip
Oct 30 03:52:48.616: INFO: Waiting up to 2m0s to get response from 10.244.4.12:8080
Oct 30 03:52:48.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testcv7j5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip'
Oct 30 03:52:49.056: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip\n"
Oct 30 03:52:49.056: INFO: stdout: "10.244.3.126:55286"
STEP: Verifying the preserved source ip
Oct 30 03:52:49.056: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Oct 30 03:52:49.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testcv7j5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Oct 30 03:52:49.412: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Oct 30 03:52:49.412: INFO: stdout: "10.244.3.126:47038"
STEP: Verifying the preserved source ip
Oct 30 03:52:49.412: INFO: Waiting up to 2m0s to get response from 10.244.3.126:8080
Oct 30 03:52:49.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testf8ktc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip'
Oct 30 03:52:49.653: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip\n"
Oct 30 03:52:49.653: INFO: stdout: "10.244.1.4:47216"
STEP: Verifying the preserved source ip
Oct 30 03:52:49.653: INFO: Waiting up to 2m0s to get response from 10.244.2.10:8080
Oct 30 03:52:49.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testf8ktc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip'
Oct 30 03:52:49.888: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip\n"
Oct 30 03:52:49.888: INFO: stdout: "10.244.1.4:47238"
STEP: Verifying the preserved source ip
Oct 30 03:52:49.888: INFO: Waiting up to 2m0s to get response from 10.244.4.12:8080
Oct 30 03:52:49.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testf8ktc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip'
Oct 30 03:52:50.130: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip\n"
Oct 30 03:52:50.130: INFO: stdout: "10.244.1.4:45360"
STEP: Verifying the preserved source ip
Oct 30 03:52:50.130: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Oct 30 03:52:50.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testf8ktc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Oct 30 03:52:50.356: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Oct 30 03:52:50.356: INFO: stdout: "10.244.1.4:58610"
STEP: Verifying the preserved source ip
Oct 30 03:52:50.356: INFO: Waiting up to 2m0s to get response from 10.244.3.126:8080
Oct 30 03:52:50.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testgzfgv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip'
Oct 30 03:52:50.615: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip\n"
Oct 30 03:52:50.615: INFO: stdout: "10.244.2.10:52854"
STEP: Verifying the preserved source ip
Oct 30 03:52:50.616: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Oct 30 03:52:50.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testgzfgv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Oct 30 03:52:50.875: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Oct 30 03:52:50.875: INFO: stdout: "10.244.2.10:49482"
STEP: Verifying the preserved source ip
Oct 30 03:52:50.875: INFO: Waiting up to 2m0s to get response from 10.244.4.12:8080
Oct 30 03:52:50.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testgzfgv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip'
Oct 30 03:52:51.115: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip\n"
Oct 30 03:52:51.115: INFO: stdout: "10.244.2.10:54386"
STEP: Verifying the preserved source ip
Oct 30 03:52:51.115: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Oct 30 03:52:51.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testgzfgv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Oct 30 03:52:51.361: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Oct 30 03:52:51.361: INFO: stdout: "10.244.2.10:40386"
STEP: Verifying the preserved source ip
Oct 30 03:52:51.361: INFO: Waiting up to 2m0s to get response from 10.244.3.126:8080
Oct 30 03:52:51.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testhzsg4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip'
Oct 30 03:52:51.940: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip\n"
Oct 30 03:52:51.940: INFO: stdout: "10.244.4.12:50596"
STEP: Verifying the preserved source ip
Oct 30 03:52:51.940: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Oct 30 03:52:51.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testhzsg4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Oct 30 03:52:52.224: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Oct 30 03:52:52.224: INFO: stdout: "10.244.4.12:46112"
STEP: Verifying the preserved source ip
Oct 30 03:52:52.224: INFO: Waiting up to 2m0s to get response from 10.244.2.10:8080
Oct 30 03:52:52.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testhzsg4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip'
Oct 30 03:52:52.483: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip\n"
Oct 30 03:52:52.483: INFO: stdout: "10.244.4.12:33160"
STEP: Verifying the preserved source ip
Oct 30 03:52:52.483: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Oct 30 03:52:52.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testhzsg4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Oct 30 03:52:52.733: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Oct 30 03:52:52.733: INFO: stdout: "10.244.4.12:48284"
STEP: Verifying the preserved source ip
Oct 30 03:52:52.733: INFO: Waiting up to 2m0s to get response from 10.244.3.126:8080
Oct 30 03:52:52.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testzz9rm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip'
Oct 30 03:52:53.035: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.126:8080/clientip\n"
Oct 30 03:52:53.035: INFO: stdout: "10.244.0.9:44774"
STEP: Verifying the preserved source ip
Oct 30 03:52:53.035: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Oct 30 03:52:53.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testzz9rm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Oct 30 03:52:53.266: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Oct 30 03:52:53.266: INFO: stdout: "10.244.0.9:48406"
STEP: Verifying the preserved source ip
Oct 30 03:52:53.266: INFO: Waiting up to 2m0s to get response from 10.244.2.10:8080
Oct 30 03:52:53.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testzz9rm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip'
Oct 30 03:52:53.507: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.10:8080/clientip\n"
Oct 30 03:52:53.507: INFO: stdout: "10.244.0.9:54288"
STEP: Verifying the preserved source ip
Oct 30 03:52:53.507: INFO: Waiting up to 2m0s to get response from 10.244.4.12:8080
Oct 30 03:52:53.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-2393 exec no-snat-testzz9rm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip'
Oct 30 03:52:53.756: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.12:8080/clientip\n"
Oct 30 03:52:53.756: INFO: stdout: "10.244.0.9:35478"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:53.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-2393" for this suite.


• [SLOW TEST:26.407 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":1,"skipped":260,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:53.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Oct 30 03:52:53.979: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:52:53.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-7983" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
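The firewall specs above (and the ESIPP ones later in this run) never get past BeforeEach because they are gated on the cloud provider, and this run uses the local provider. A small sketch of that kind of provider gate; the real check lives in the e2e framework's skipper helpers, so the function below is only an illustration:

package main

import "fmt"

// skipUnlessProviderIs mimics the provider gate seen above: a test body is
// only worth running when the cluster runs on one of the named providers.
func skipUnlessProviderIs(current string, supported ...string) (run bool, reason string) {
	for _, p := range supported {
		if current == p {
			return true, ""
		}
	}
	return false, fmt.Sprintf("Only supported for providers %v (not %s)", supported, current)
}

func main() {
	if run, reason := skipUnlessProviderIs("local", "gce"); !run {
		fmt.Println("SKIP:", reason)
		return
	}
	fmt.Println("running firewall checks")
}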
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:23.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-6956
STEP: creating a client pod for probing the service svc-udp
Oct 30 03:52:23.326: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:25.328: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:27.329: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:29.329: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:31.331: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:33.330: INFO: The status of Pod pod-client is Running (Ready = true)
Oct 30 03:52:33.806: INFO: Pod client logs: Sat Oct 30 03:52:28 UTC 2021
Sat Oct 30 03:52:28 UTC 2021 Try: 1

Sat Oct 30 03:52:28 UTC 2021 Try: 2

Sat Oct 30 03:52:28 UTC 2021 Try: 3

Sat Oct 30 03:52:28 UTC 2021 Try: 4

Sat Oct 30 03:52:28 UTC 2021 Try: 5

Sat Oct 30 03:52:28 UTC 2021 Try: 6

Sat Oct 30 03:52:28 UTC 2021 Try: 7

Sat Oct 30 03:52:33 UTC 2021 Try: 8

Sat Oct 30 03:52:33 UTC 2021 Try: 9

Sat Oct 30 03:52:33 UTC 2021 Try: 10

Sat Oct 30 03:52:33 UTC 2021 Try: 11

Sat Oct 30 03:52:33 UTC 2021 Try: 12

Sat Oct 30 03:52:33 UTC 2021 Try: 13

STEP: creating a backend pod pod-server-1 for the service svc-udp
Oct 30 03:52:33.820: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:35.825: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:37.824: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:39.823: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-6956 to expose endpoints map[pod-server-1:[80]]
Oct 30 03:52:39.832: INFO: successfully validated that service svc-udp in namespace conntrack-6956 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Oct 30 03:52:49.862: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:51.867: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:53.865: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Oct 30 03:52:53.867: INFO: Cleaning up pod-server-1 pod
Oct 30 03:52:53.873: INFO: Waiting for pod pod-server-1 to disappear
Oct 30 03:52:53.876: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-6956 to expose endpoints map[pod-server-2:[80]]
Oct 30 03:52:53.882: INFO: successfully validated that service svc-udp in namespace conntrack-6956 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:04.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6956" for this suite.


• [SLOW TEST:40.915 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":6,"skipped":836,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:38.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-492
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:52:38.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:38.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:40.851: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:42.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:44.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:46.850: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:48.849: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:50.850: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:52.849: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:54.849: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:56.851: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:58.849: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:00.849: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:53:00.853: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:04.873: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:53:04.873: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:04.880: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:04.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-492" for this suite.


S [SKIPPING] [26.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:33.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-2599
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:52:33.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:33.200: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:35.205: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:37.418: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:39.204: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:41.206: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:43.205: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:45.205: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:47.206: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:49.204: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:51.210: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:53.204: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:55.204: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:55.210: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:07.231: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:53:07.231: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:07.238: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:07.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2599" for this suite.


S [SKIPPING] [34.197 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:48.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-2067
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:52:48.841: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:52:48.872: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:50.875: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:52.874: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:54.875: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:56.876: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:58.875: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:00.875: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:02.874: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:04.875: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:06.876: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:08.876: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:10.878: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:53:10.883: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:16.946: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:53:16.946: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:16.952: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:16.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2067" for this suite.


S [SKIPPING] [28.233 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:17.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Oct 30 03:53:17.185: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:17.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-6544" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:17.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:53:17.504: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:17.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4982" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:17.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:53:17.593: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:17.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7430" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.026 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:54.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Oct 30 03:52:54.054: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:56.059: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:58.058: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:00.058: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:02.059: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:04.058: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:06.059: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Oct 30 03:53:06.076: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:08.079: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:10.079: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Oct 30 03:53:12.127: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:14.131: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:16.131: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:18.130: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:20.130: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Oct 30 03:53:20.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-9254 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Oct 30 03:53:20.405: INFO: stderr: "+ grep -m 1 CLOSE_WAIT.*dport=11302\n+ conntrack -L -f ipv4 -d 10.10.190.208\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Oct 30 03:53:20.405: INFO: stdout: "tcp      6 3597 CLOSE_WAIT src=10.244.3.139 dst=10.10.190.208 sport=53418 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=7592 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Oct 30 03:53:20.405: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3597 CLOSE_WAIT src=10.244.3.139 dst=10.10.190.208 sport=53418 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=7592 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:20.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-9254" for this suite.


• [SLOW TEST:26.396 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":2,"skipped":378,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:40.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Oct 30 03:52:40.277: INFO: deploying iperf2 server
Oct 30 03:52:40.280: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Oct 30 03:52:40.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 30 03:52:42.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:52:44.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771162760, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:52:46.298: INFO: waiting for iperf2 server endpoints
Oct 30 03:52:48.301: INFO: found iperf2 server endpoints
Oct 30 03:52:48.301: INFO: waiting for client pods to be running
Oct 30 03:52:50.306: INFO: all client pods are ready: 2 pods
Oct 30 03:52:50.309: INFO: server pod phase Running
Oct 30 03:52:50.309: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:52:40 +0000 UTC Reason: Message:}
Oct 30 03:52:50.309: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:52:45 +0000 UTC Reason: Message:}
Oct 30 03:52:50.309: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:52:45 +0000 UTC Reason: Message:}
Oct 30 03:52:50.309: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:52:40 +0000 UTC Reason: Message:}
Oct 30 03:52:50.309: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-10-30 03:52:45 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://fd2ab7e5a6e2e44a2bfe62d9da164941d3bbe29fb01e265e2e171043ca45d5ee Started:0xc000b0531c}
Oct 30 03:52:50.309: INFO: found 2 matching client pods
Oct 30 03:52:50.312: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-7125 PodName:iperf2-clients-lvvkr ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:50.312: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:50.531: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Oct 30 03:52:50.531: INFO: iperf version: 
Oct 30 03:52:50.531: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-lvvkr (node node1)
Oct 30 03:52:50.533: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-7125 PodName:iperf2-clients-lvvkr ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:50.533: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:05.694: INFO: Exec stderr: ""
Oct 30 03:53:05.694: INFO: output from exec on client pod iperf2-clients-lvvkr (node node1): 
20211030035251.663,10.244.3.133,51484,10.233.17.45,6789,3,0.0-1.0,4045537280,32364298240
20211030035252.666,10.244.3.133,51484,10.233.17.45,6789,3,1.0-2.0,3475767296,27806138368
20211030035253.661,10.244.3.133,51484,10.233.17.45,6789,3,2.0-3.0,3451518976,27612151808
20211030035254.651,10.244.3.133,51484,10.233.17.45,6789,3,3.0-4.0,3613130752,28905046016
20211030035255.648,10.244.3.133,51484,10.233.17.45,6789,3,4.0-5.0,4209508352,33676066816
20211030035256.672,10.244.3.133,51484,10.233.17.45,6789,3,5.0-6.0,3521380352,28171042816
20211030035257.719,10.244.3.133,51484,10.233.17.45,6789,3,6.0-7.0,2311716864,18493734912
20211030035258.665,10.244.3.133,51484,10.233.17.45,6789,3,7.0-8.0,3873570816,30988566528
20211030035259.660,10.244.3.133,51484,10.233.17.45,6789,3,8.0-9.0,4045406208,32363249664
20211030035300.648,10.244.3.133,51484,10.233.17.45,6789,3,9.0-10.0,4054712320,32437698560
20211030035300.648,10.244.3.133,51484,10.233.17.45,6789,3,0.0-10.0,36602249216,29281758378

Oct 30 03:53:05.697: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-7125 PodName:iperf2-clients-wx58w ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:05.697: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:05.796: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Oct 30 03:53:05.796: INFO: iperf version: 
Oct 30 03:53:05.796: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-wx58w (node node2)
Oct 30 03:53:05.798: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-7125 PodName:iperf2-clients-wx58w ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:05.798: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:20.951: INFO: Exec stderr: ""
Oct 30 03:53:20.951: INFO: output from exec on client pod iperf2-clients-wx58w (node node2): 
20211030035306.904,10.244.4.16,50564,10.233.17.45,6789,3,0.0-1.0,79429632,635437056
20211030035307.910,10.244.4.16,50564,10.233.17.45,6789,3,1.0-2.0,117964800,943718400
20211030035308.899,10.244.4.16,50564,10.233.17.45,6789,3,2.0-3.0,98959360,791674880
20211030035309.889,10.244.4.16,50564,10.233.17.45,6789,3,3.0-4.0,110755840,886046720
20211030035310.895,10.244.4.16,50564,10.233.17.45,6789,3,4.0-5.0,114950144,919601152
20211030035311.901,10.244.4.16,50564,10.233.17.45,6789,3,5.0-6.0,117047296,936378368
20211030035312.907,10.244.4.16,50564,10.233.17.45,6789,3,6.0-7.0,117571584,940572672
20211030035313.913,10.244.4.16,50564,10.233.17.45,6789,3,7.0-8.0,118095872,944766976
20211030035314.901,10.244.4.16,50564,10.233.17.45,6789,3,8.0-9.0,115736576,925892608
20211030035315.908,10.244.4.16,50564,10.233.17.45,6789,3,9.0-10.0,117833728,942669824
20211030035315.908,10.244.4.16,50564,10.233.17.45,6789,3,0.0-10.0,1108344832,885826181

Oct 30 03:53:20.951: INFO:                                From                                 To    Bandwidth (MB/s)
Oct 30 03:53:20.951: INFO:                               node1                              node1                3491
Oct 30 03:53:20.951: INFO:                               node2                              node1                 106
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-7125" for this suite.


• [SLOW TEST:40.708 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":631,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:52:06.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-1302
STEP: creating service service-headless in namespace services-1302
STEP: creating replication controller service-headless in namespace services-1302
I1030 03:52:06.496576      22 runners.go:190] Created replication controller with name: service-headless, namespace: services-1302, replica count: 3
I1030 03:52:09.548046      22 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:12.549562      22 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:15.550008      22 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:18.550565      22 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-1302
STEP: creating service service-headless-toggled in namespace services-1302
STEP: creating replication controller service-headless-toggled in namespace services-1302
I1030 03:52:18.561884      22 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-1302, replica count: 3
I1030 03:52:21.614063      22 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:24.614423      22 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:27.615138      22 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:30.616777      22 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Oct 30 03:52:30.619: INFO: Creating new host exec pod
Oct 30 03:52:30.630: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:32.632: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:34.632: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:36.635: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:38.633: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:40.634: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:52:40.634: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:52:46.656: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done" in pod services-1302/verify-service-up-host-exec-pod
Oct 30 03:52:46.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done'
Oct 30 03:52:47.124: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n"
Oct 30 03:52:47.125: INFO: stdout: "service-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\
nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\n"
Oct 30 03:52:47.125: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done" in pod services-1302/verify-service-up-exec-pod-hsb9g
Oct 30 03:52:47.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-up-exec-pod-hsb9g -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done'
Oct 30 03:52:47.575: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n"
Oct 30 03:52:47.575: INFO: stdout: "service-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\
nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\n"
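
The "verifying service has 3 reachable backends" step passes because the 150 wget responses above contain all three replica names (service-headless-toggled-qhh25, -7vkvw and -vq4td); each request returns the name of the pod that served it. A hand-run equivalent of the same check (the trailing sort | uniq -c is an addition for readability, not part of the test) would be:

  kubectl --kubeconfig=/root/.kube/config -n services-1302 exec verify-service-up-host-exec-pod -- \
    /bin/sh -c 'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done' \
    | sort | uniq -c   # expect three distinct pod names, each with a non-zero count
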
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1302
STEP: Deleting pod verify-service-up-exec-pod-hsb9g in namespace services-1302
STEP: verifying service-headless is not up
Oct 30 03:52:47.590: INFO: Creating new host exec pod
Oct 30 03:52:47.602: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:49.606: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:51.607: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:52:51.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.61:80 && echo service-down-failed'
Oct 30 03:52:54.402: INFO: rc: 28
Oct 30 03:52:54.402: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.3.61:80 && echo service-down-failed" in pod services-1302/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.61:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.3.61:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1302
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Oct 30 03:52:54.415: INFO: Creating new host exec pod
Oct 30 03:52:54.428: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:56.433: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:58.432: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:52:58.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.31.42:80 && echo service-down-failed'
Oct 30 03:53:00.691: INFO: rc: 28
Oct 30 03:53:00.691: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.31.42:80 && echo service-down-failed" in pod services-1302/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.31.42:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.31.42:80
command terminated with exit code 28

error:
exit status 28
Output: 
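
In both "not up" checks above, curl exiting with status 28 means its two-second connect timeout expired; the spec treats that timeout as confirmation that the ClusterIP is not being served, first for the always-headless service at 10.233.3.61 and then for 10.233.31.42 once the service.kubernetes.io/headless label is in place. A manual repeat of the same probe (hypothetical, reusing the pod and namespace from this run) is just:

  kubectl --kubeconfig=/root/.kube/config -n services-1302 exec verify-service-down-host-exec-pod -- \
    /bin/sh -c 'curl -g -s --connect-timeout 2 http://10.233.31.42:80; echo "curl exit=$?"'   # expect 28 (timeout)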
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1302
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Oct 30 03:53:00.704: INFO: Creating new host exec pod
Oct 30 03:53:00.716: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:02.722: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:04.722: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:53:04.722: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:53:16.742: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done" in pod services-1302/verify-service-up-host-exec-pod
Oct 30 03:53:16.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done'
Oct 30 03:53:17.483: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n"
Oct 30 03:53:17.484: INFO: stdout: "service-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\
nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\n"
Oct 30 03:53:17.484: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done" in pod services-1302/verify-service-up-exec-pod-zjdsv
Oct 30 03:53:17.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-up-exec-pod-zjdsv -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.42:80 2>&1 || true; echo; done'
Oct 30 03:53:17.929: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.42:80\n+ echo\n"
Oct 30 03:53:17.930: INFO: stdout: "service-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\
nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-7vkvw\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-vq4td\nservice-headless-toggled-qhh25\nservice-headless-toggled-qhh25\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\nservice-headless-toggled-vq4td\nservice-headless-toggled-7vkvw\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1302
STEP: Deleting pod verify-service-up-exec-pod-zjdsv in namespace services-1302
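The reachability check above is just a short shell loop run through kubectl exec, once from a host-network pod and once from an ordinary pod; the distinct backend pod names echoed back (three here) are what confirms the endpoints are all serving. A minimal standalone sketch of the same probe, assuming the ClusterIP 10.233.31.42 from the log; the uniq -c tally at the end is an addition for readability, not something the framework runs:

  SVC=http://10.233.31.42:80
  # 150 quick requests against the ClusterIP; each backend echoes its own pod name
  for i in $(seq 1 150); do
    wget -q -T 1 -O - "$SVC" 2>&1 || true
    echo
  done | sort | uniq -c    # three distinct names means three reachable backends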
STEP: verifying service-headless is still not up
Oct 30 03:53:17.943: INFO: Creating new host exec pod
Oct 30 03:53:17.958: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:19.961: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:53:19.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.61:80 && echo service-down-failed'
Oct 30 03:53:22.259: INFO: rc: 28
Oct 30 03:53:22.259: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.3.61:80 && echo service-down-failed" in pod services-1302/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1302 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.61:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.3.61:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1302
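The negative check relies only on curl's exit status: with --connect-timeout 2, curl gives up after two seconds and exits 28 when nothing answers on the ClusterIP, so the rc: 28 above is the expected result; reaching the echo would have printed service-down-failed and failed the test. A sketch of the same check, assuming the ClusterIP 10.233.3.61 from the log:

  # curl exits 28 on a connect timeout; the echo only runs if the ClusterIP still answers
  if curl -g -s --connect-timeout 2 http://10.233.3.61:80; then
    echo service-down-failed
  fi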
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:22.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1302" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:75.807 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":852,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:17.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Oct 30 03:53:17.745: INFO: Creating new exec pod
Oct 30 03:53:21.764: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Oct 30 03:53:21.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4124 exec execpod-noendpointswmkc9 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 30 03:53:23.047: INFO: rc: 1
Oct 30 03:53:23.047: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4124 exec execpod-noendpointswmkc9 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
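For a ClusterIP service with no endpoints, the expectation is an active rejection rather than a timeout, which is why the framework treats the REFUSED in the error text as success. The probe itself is agnhost's connect subcommand run inside the exec pod; reproduced as a standalone command with the names from the log:

  # expected outcome: "REFUSED" on stderr and exit status 1, because no-pods has no endpoints
  kubectl -n services-4124 exec execpod-noendpointswmkc9 -- /agnhost connect --timeout=3s no-pods:80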
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:23.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4124" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:5.347 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":838,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:53.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-3745
STEP: creating service service-proxy-disabled in namespace services-3745
STEP: creating replication controller service-proxy-disabled in namespace services-3745
I1030 03:51:53.517835      30 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-3745, replica count: 3
I1030 03:51:56.569984      30 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:51:59.571027      30 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:02.571594      30 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:05.572225      30 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-3745
STEP: creating service service-proxy-toggled in namespace services-3745
STEP: creating replication controller service-proxy-toggled in namespace services-3745
I1030 03:52:05.585954      30 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-3745, replica count: 3
I1030 03:52:08.636569      30 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:52:11.636863      30 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
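Both backend sets are plain ReplicationControllers, and the runner simply polls every few seconds until the desired replica count reports Running. An equivalent wait done by hand, assuming the runner tags its pods with a name=<rc-name> label (an assumption, the selector is not shown in the log):

  # block until the three service-proxy-toggled pods report Ready, or give up after two minutes
  kubectl -n services-3745 wait --for=condition=Ready pod -l name=service-proxy-toggled --timeout=120s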
STEP: verifying service is up
Oct 30 03:52:11.639: INFO: Creating new host exec pod
Oct 30 03:52:11.651: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:13.654: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:15.655: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:17.654: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:19.653: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:21.655: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:52:21.656: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:52:27.670: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done" in pod services-3745/verify-service-up-host-exec-pod
Oct 30 03:52:27.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done'
Oct 30 03:52:28.043: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n"
Oct 30 03:52:28.044: INFO: stdout: "service-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-pr
oxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\n"
Oct 30 03:52:28.044: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done" in pod services-3745/verify-service-up-exec-pod-vm7s2
Oct 30 03:52:28.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-up-exec-pod-vm7s2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done'
Oct 30 03:52:28.414: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n"
Oct 30 03:52:28.415: INFO: stdout: "service-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-pr
oxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3745
STEP: Deleting pod verify-service-up-exec-pod-vm7s2 in namespace services-3745
STEP: verifying service-disabled is not up
Oct 30 03:52:28.426: INFO: Creating new host exec pod
Oct 30 03:52:28.441: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:30.445: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:32.446: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:34.443: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:36.445: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:38.445: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:40.446: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:52:40.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.86:80 && echo service-down-failed'
Oct 30 03:52:43.207: INFO: rc: 28
Oct 30 03:52:43.207: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.62.86:80 && echo service-down-failed" in pod services-3745/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.86:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.62.86:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3745
STEP: adding service-proxy-name label
STEP: verifying service is not up
Oct 30 03:52:43.223: INFO: Creating new host exec pod
Oct 30 03:52:43.240: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:45.243: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:47.245: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:49.245: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:51.245: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:53.243: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:55.244: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:57.244: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:52:59.244: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:01.245: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:03.244: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:05.244: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:07.245: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:53:07.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.20.40:80 && echo service-down-failed'
Oct 30 03:53:10.468: INFO: rc: 28
Oct 30 03:53:10.468: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.20.40:80 && echo service-down-failed" in pod services-3745/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.20.40:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.20.40:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3745
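The "not up" checks rely on curl's exit status: with the service unprogrammed the connection attempt must time out (exit code 28), and the trailing '&& echo service-down-failed' only prints if a backend unexpectedly answers. A hand-run sketch, assuming the verify-service-down-host-exec-pod pod and the 10.233.20.40:80 ClusterIP from this run:

    # Once the service-proxy-name label is set, the ClusterIP should stop answering:
    # curl must time out; any "service-down-failed" output means the check failed.
    kubectl --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c \
      'curl -g -s --connect-timeout 2 http://10.233.20.40:80 && echo service-down-failed'
    echo "exit code: $?"   # 28 (connection timed out) is the expected result here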
STEP: removing service-proxy-name label
STEP: verifying service is up
Oct 30 03:53:10.483: INFO: Creating new host exec pod
Oct 30 03:53:10.499: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:12.502: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:14.503: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:16.503: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:53:16.503: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:53:20.519: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done" in pod services-3745/verify-service-up-host-exec-pod
Oct 30 03:53:20.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done'
Oct 30 03:53:21.062: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n"
Oct 30 03:53:21.062: INFO: stdout: "service-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-pr
oxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\n"
Oct 30 03:53:21.062: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done" in pod services-3745/verify-service-up-exec-pod-bh6g9
Oct 30 03:53:21.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-up-exec-pod-bh6g9 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.40:80 2>&1 || true; echo; done'
Oct 30 03:53:21.411: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.40:80\n+ echo\n"
Oct 30 03:53:21.412: INFO: stdout: "service-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-pr
oxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-jbjrr\nservice-proxy-toggled-trxd7\nservice-proxy-toggled-cj5j7\nservice-proxy-toggled-cj5j7\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3745
STEP: Deleting pod verify-service-up-exec-pod-bh6g9 in namespace services-3745
STEP: verifying service-disabled is still not up
Oct 30 03:53:21.425: INFO: Creating new host exec pod
Oct 30 03:53:21.437: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:23.439: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:25.451: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:27.439: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:29.440: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:53:29.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.86:80 && echo service-down-failed'
Oct 30 03:53:31.950: INFO: rc: 28
Oct 30 03:53:31.950: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.62.86:80 && echo service-down-failed" in pod services-3745/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3745 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.86:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.62.86:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3745
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:31.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3745" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:98.480 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":3,"skipped":246,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 03:53:32.112: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:04.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-7299
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:53:05.021: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:05.053: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:07.057: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:09.058: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:11.056: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:13.056: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:15.058: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:17.057: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:19.057: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:21.057: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:23.059: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:25.058: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:27.059: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:53:27.063: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:33.083: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:53:33.083: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:33.090: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:33.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7299" for this suite.


S [SKIPPING] [28.188 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 30 03:53:33.100: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:23.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-7655
STEP: deleting original service nodeport-reuse
Oct 30 03:53:23.207: INFO: Creating new host exec pod
Oct 30 03:53:23.354: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:25.357: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:27.359: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:29.357: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:31.358: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:33.357: INFO: The status of Pod hostexec is Running (Ready = true)
Oct 30 03:53:33.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7655 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :31330' | tail -n +2 | grep LISTEN'
Oct 30 03:53:33.647: INFO: stderr: "+ tail -n +2\n+ grep LISTEN\n+ ss -ant46 'sport = :31330'\n"
Oct 30 03:53:33.647: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 31330
STEP: deleting service nodeport-reuse in namespace services-7655
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:33.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7655" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:10.504 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":3,"skipped":890,"failed":0}
Oct 30 03:53:33.675: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:42.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-3511
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:51:42.613: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:51:42.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:44.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:51:46.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:48.646: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:50.646: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:52.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:54.646: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:56.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:51:58.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:00.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:02.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:52:04.647: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:52:04.652: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:52:08.674: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:52:08.674: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Oct 30 03:52:08.695: INFO: Service node-port-service in namespace nettest-3511 found.
Oct 30 03:52:08.709: INFO: Service session-affinity-service in namespace nettest-3511 found.
STEP: Waiting for NodePort service to expose endpoint
Oct 30 03:52:09.712: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Oct 30 03:52:10.716: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.233.32.17:90 (config.clusterIP)
Oct 30 03:52:10.721: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.233.32.17&port=90&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:10.721: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:10.883: INFO: Waiting for responses: map[netserver-1:{}]
Oct 30 03:52:12.887: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.233.32.17&port=90&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:12.887: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:13.171: INFO: Waiting for responses: map[netserver-1:{}]
Oct 30 03:52:15.176: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.233.32.17&port=90&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:15.176: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:15.287: INFO: Waiting for responses: map[netserver-1:{}]
Oct 30 03:52:17.292: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.233.32.17&port=90&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:17.292: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:17.390: INFO: Waiting for responses: map[]
Oct 30 03:52:17.390: INFO: reached 10.233.32.17 after 3/34 tries
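The UDP reachability polling above goes through the agnhost webserver's /dial endpoint on test-container-pod: the pod issues the UDP "hostname" request itself and reports which backends answered, and the test retries until no expected netserver name is left outstanding. A hand-run sketch, assuming the test pod IP 10.244.4.250 and the ClusterIP 10.233.32.17:90 from this run:

    # Ask the test pod to send one UDP hostname probe to the service ClusterIP;
    # the endpoint replies with a small JSON document naming the backend that answered.
    kubectl --namespace=nettest-3511 exec test-container-pod -c webserver -- /bin/sh -c \
      "curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.233.32.17&port=90&tries=1'"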
STEP: dialing(udp) test-container-pod --> 10.10.190.207:32181 (nodeIP)
Oct 30 03:52:17.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:17.393: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:17.476: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:19.480: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:19.480: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:19.572: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:21.575: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:21.575: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:21.665: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:23.669: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:23.669: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:23.886: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:25.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:25.892: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:26.034: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:28.037: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:28.037: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:28.148: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:30.153: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:30.153: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:30.269: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:32.493: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:32.493: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:32.602: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:34.605: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:34.605: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:34.890: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:36.894: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:36.894: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:36.981: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:38.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:38.989: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:39.082: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:41.085: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:41.085: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:42.344: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:44.347: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:44.347: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:44.541: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:46.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:46.546: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:46.630: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:48.634: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:48.634: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:48.717: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:50.722: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:50.722: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:50.937: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:52.941: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:52.941: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:53.027: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:55.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:55.031: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:55.145: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:57.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:57.150: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:57.244: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:52:59.247: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:52:59.247: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:52:59.493: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:01.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:01.497: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:01.616: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:03.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:03.620: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:03.704: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:05.708: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:05.708: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:05.799: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:07.814: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:07.814: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:08.072: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:10.075: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:10.075: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:10.271: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:12.276: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:12.276: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:12.574: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:14.577: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:14.577: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:15.067: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:17.071: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:17.071: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:17.265: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:19.268: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:19.268: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:19.391: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:21.394: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:21.394: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:21.521: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:23.526: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:23.526: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:23.790: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:25.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:25.794: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:26.162: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:28.167: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:28.167: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:28.364: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:53:30.367: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'] Namespace:nettest-3511 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:53:30.367: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:53:30.825: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
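
Note on the probe loop above: the framework execs into test-container-pod and curls the agnhost /dial helper at 10.244.4.250:9080, asking it to send a single UDP "hostname" request to node IP 10.10.190.207 on NodePort 32181, then polls until the set of hostnames that answer covers both netserver-0 and netserver-1. A roughly equivalent manual probe is sketched below; it assumes kubectl exec reaches the same container the framework drives through its own exec path and that the dial helper is still serving on that pod IP.

  kubectl --kubeconfig=/root/.kube/config -n nettest-3511 exec test-container-pod -c webserver -- \
    curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'
  # A healthy run returns a small JSON body listing the hostnames that answered the UDP request;
  # in this run the framework kept getting back an empty map, i.e. nothing ever answered on 32181/UDP.
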
Oct 30 03:53:32.826: INFO: 
Output of kubectl describe pod nettest-3511/netserver-0:

Oct 30 03:53:32.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-3511 describe pod netserver-0 --namespace=nettest-3511'
Oct 30 03:53:33.006: INFO: stderr: ""
Oct 30 03:53:33.006: INFO: stdout: "Name:         netserver-0\nNamespace:    nettest-3511\nPriority:     0\nNode:         node1/10.10.190.207\nStart Time:   Sat, 30 Oct 2021 03:51:42 +0000\nLabels:       selector-361f4905-ca7c-485f-aba4-a77af4417289=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.107\"\n                    ],\n                    \"mac\": \"36:ff:75:ce:54:b4\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.107\"\n                    ],\n                    \"mac\": \"36:ff:75:ce:54:b4\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.3.107\nIPs:\n  IP:  10.244.3.107\nContainers:\n  webserver:\n    Container ID:  docker://e69946f505501d4776ed234e089688c310a401ea4bbbbe3209c63d4d04f98e3a\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Sat, 30 Oct 2021 03:51:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dspfg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-dspfg:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node1\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  111s  default-scheduler  Successfully assigned nettest-3511/netserver-0 to node1\n  Normal  Pulling    109s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     109s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 304.660263ms\n  Normal  Created    109s  kubelet            Created container webserver\n  Normal  Started    108s  kubelet            Started container webserver\n"
Oct 30 03:53:33.007: INFO: Name:         netserver-0
Namespace:    nettest-3511
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Sat, 30 Oct 2021 03:51:42 +0000
Labels:       selector-361f4905-ca7c-485f-aba4-a77af4417289=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.107"
                    ],
                    "mac": "36:ff:75:ce:54:b4",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.107"
                    ],
                    "mac": "36:ff:75:ce:54:b4",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.107
IPs:
  IP:  10.244.3.107
Containers:
  webserver:
    Container ID:  docker://e69946f505501d4776ed234e089688c310a401ea4bbbbe3209c63d4d04f98e3a
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Sat, 30 Oct 2021 03:51:45 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dspfg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-dspfg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  111s  default-scheduler  Successfully assigned nettest-3511/netserver-0 to node1
  Normal  Pulling    109s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     109s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 304.660263ms
  Normal  Created    109s  kubelet            Created container webserver
  Normal  Started    108s  kubelet            Started container webserver

Oct 30 03:53:33.007: INFO: 
Output of kubectl describe pod nettest-3511/netserver-1:

Oct 30 03:53:33.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-3511 describe pod netserver-1 --namespace=nettest-3511'
Oct 30 03:53:33.180: INFO: stderr: ""
Oct 30 03:53:33.180: INFO: stdout: "Name:         netserver-1\nNamespace:    nettest-3511\nPriority:     0\nNode:         node2/10.10.190.208\nStart Time:   Sat, 30 Oct 2021 03:51:42 +0000\nLabels:       selector-361f4905-ca7c-485f-aba4-a77af4417289=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.239\"\n                    ],\n                    \"mac\": \"6a:03:e1:70:e5:fc\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.239\"\n                    ],\n                    \"mac\": \"6a:03:e1:70:e5:fc\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.4.239\nIPs:\n  IP:  10.244.4.239\nContainers:\n  webserver:\n    Container ID:  docker://325a0bb8652a69c90a0ea966d05afd9418e508c87f71e4b7fd3826b67be90c07\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Sat, 30 Oct 2021 03:51:44 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ssj4h (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-ssj4h:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node2\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  111s  default-scheduler  Successfully assigned nettest-3511/netserver-1 to node2\n  Normal  Pulling    109s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     109s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 277.854928ms\n  Normal  Created    109s  kubelet            Created container webserver\n  Normal  Started    109s  kubelet            Started container webserver\n"
Oct 30 03:53:33.180: INFO: Name:         netserver-1
Namespace:    nettest-3511
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Sat, 30 Oct 2021 03:51:42 +0000
Labels:       selector-361f4905-ca7c-485f-aba4-a77af4417289=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.239"
                    ],
                    "mac": "6a:03:e1:70:e5:fc",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.239"
                    ],
                    "mac": "6a:03:e1:70:e5:fc",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.239
IPs:
  IP:  10.244.4.239
Containers:
  webserver:
    Container ID:  docker://325a0bb8652a69c90a0ea966d05afd9418e508c87f71e4b7fd3826b67be90c07
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Sat, 30 Oct 2021 03:51:44 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ssj4h (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-ssj4h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  111s  default-scheduler  Successfully assigned nettest-3511/netserver-1 to node2
  Normal  Pulling    109s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     109s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 277.854928ms
  Normal  Created    109s  kubelet            Created container webserver
  Normal  Started    109s  kubelet            Started container webserver

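Both describe dumps above show healthy backends: netserver-0 (10.244.3.107 on node1) and netserver-1 (10.244.4.239 on node2) are Running and Ready, each serving agnhost netexec on container ports 8080/TCP and 8081/UDP with no host ports. That points the dial failure below at the UDP NodePort path on 10.10.190.207:32181 rather than at the pods themselves. A hedged follow-up, since the Service name is not shown in this excerpt, is to list the namespace and confirm the UDP NodePort and its endpoints exist:

  kubectl --kubeconfig=/root/.kube/config -n nettest-3511 get svc,endpoints -o wide
  # Expect a Service exposing NodePort 32181/UDP with both pod IPs above listed as ready
  # endpoints; anything missing here localizes the fault to the Service/Endpoints layer
  # rather than to kube-proxy or the CNI.
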
Oct 30 03:53:33.180: INFO: encountered error during dial (did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}])
Oct 30 03:53:33.181: FAIL: failed dialing endpoint, did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001299b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001299b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001299b00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
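
The failure itself (34 tries, an empty response map for a UDP request to 10.10.190.207:32181) is consistent with a NodePort that silently drops UDP. The proxy mode is not shown in this log; assuming the common iptables mode, two checks one might run directly on node1 are sketched below (the second needs conntrack-tools installed):

  # on node1 (10.10.190.207), as root
  iptables-save | grep -w 32181      # was the UDP NodePort actually programmed by kube-proxy?
  conntrack -L -p udp | grep 32181   # stale UDP conntrack entries can blackhole NodePort traffic
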
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-3511".
STEP: Found 15 events.
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:42 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-3511/netserver-0 to node1
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:42 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-3511/netserver-1 to node2
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 304.660263ms
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:44 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 277.854928ms
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:51:45 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:52:04 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-3511/test-container-pod to node2
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:52:06 +0000 UTC - event for test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:52:06 +0000 UTC - event for test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 365.008234ms
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:52:06 +0000 UTC - event for test-container-pod: {kubelet node2} Created: Created container webserver
Oct 30 03:53:33.185: INFO: At 2021-10-30 03:52:07 +0000 UTC - event for test-container-pod: {kubelet node2} Started: Started container webserver
Oct 30 03:53:33.187: INFO: POD                 NODE   PHASE    GRACE  CONDITIONS
Oct 30 03:53:33.187: INFO: netserver-0         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:42 +0000 UTC  }]
Oct 30 03:53:33.187: INFO: netserver-1         node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:42 +0000 UTC  }]
Oct 30 03:53:33.187: INFO: test-container-pod  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:52:04 +0000 UTC  }]
Oct 30 03:53:33.187: INFO: 
Oct 30 03:53:33.191: INFO: 
Logging node info for node master1
Oct 30 03:53:33.193: INFO: Node Info: &Node{ObjectMeta:{master1    b47c04d5-47a7-4a95-8e97-481e6e60af54 145805 0 2021-10-29 21:05:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:29 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:29 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:29 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:29 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:33.194: INFO: 
Logging kubelet events for node master1
Oct 30 03:53:33.196: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 30 03:53:33.216: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:53:33.216: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 30 03:53:33.216: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Init container install-cni ready: true, restart count 0
Oct 30 03:53:33.216: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:53:33.216: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:33.216: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.216: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:33.216: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container kube-scheduler ready: true, restart count 0
Oct 30 03:53:33.216: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:33.216: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:53:33.216: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.216: INFO: 	Container docker-registry ready: true, restart count 0
Oct 30 03:53:33.216: INFO: 	Container nginx ready: true, restart count 0
W1030 03:53:33.230380      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:33.305: INFO: 
Latency metrics for node master1
Oct 30 03:53:33.305: INFO: 
Logging node info for node master2
Oct 30 03:53:33.308: INFO: Node Info: &Node{ObjectMeta:{master2    208792d3-d365-4ddb-83d4-10e6e818079c 145576 0 2021-10-29 21:06:06 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:33.309: INFO: 
Logging kubelet events for node master2
Oct 30 03:53:33.311: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 30 03:53:33.333: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:33.333: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 30 03:53:33.333: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:33.333: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.333: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:33.333: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:53:33.333: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Container kube-controller-manager ready: true, restart count 3
Oct 30 03:53:33.333: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:53:33.333: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.333: INFO: 	Container kube-proxy ready: true, restart count 2
W1030 03:53:33.347626      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:33.410: INFO: 
Latency metrics for node master2
Oct 30 03:53:33.410: INFO: 
Logging node info for node master3
Oct 30 03:53:33.412: INFO: Node Info: &Node{ObjectMeta:{master3    168f1589-e029-47ae-b194-10215fc22d6a 145544 0 2021-10-29 21:06:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:25 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:33.413: INFO: 
Logging kubelet events for node master3
Oct 30 03:53:33.414: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 30 03:53:33.430: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:33.430: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:33.430: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:53:33.431: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:33.431: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:53:33.431: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.431: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 30 03:53:33.431: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.431: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:33.431: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 30 03:53:33.431: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:33.431: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container autoscaler ready: true, restart count 1
Oct 30 03:53:33.431: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 30 03:53:33.431: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:53:33.431: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.431: INFO: 	Container kube-scheduler ready: true, restart count 2
W1030 03:53:33.444499      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:33.530: INFO: 
Latency metrics for node master3
Oct 30 03:53:33.530: INFO: 
Logging node info for node node1
Oct 30 03:53:33.533: INFO: Node Info: &Node{ObjectMeta:{node1    ddef9269-94c5-4165-81fb-a3b0c4ac5c75 145790 0 2021-10-29 21:07:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 03:08:22 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:28 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:28 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:28 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:28 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:33.534: INFO: 
Logging kubelet events for node node1
Oct 30 03:53:33.536: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 30 03:53:33.553: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:33.553: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:53:33.553: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:33.553: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:53:33.553: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:53:33.553: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.553: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:53:33.553: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:53:33.553: INFO: 	Container init ready: false, restart count 0
Oct 30 03:53:33.553: INFO: 	Container install ready: false, restart count 0
Oct 30 03:53:33.553: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:53:33.553: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:53:33.553: INFO: netserver-0 started at 2021-10-30 03:53:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.553: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:33.553: INFO: nodeport-update-service-cwmvf started at 2021-10-30 03:51:24 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 30 03:53:33.554: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:33.554: INFO: netserver-0 started at 2021-10-30 03:51:42 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.554: INFO: host-test-container-pod started at 2021-10-30 03:53:26 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:53:33.554: INFO: service-proxy-toggled-jbjrr started at 2021-10-30 03:52:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:53:33.554: INFO: service-headless-6c2wk started at 2021-10-30 03:52:06 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container service-headless ready: true, restart count 0
Oct 30 03:53:33.554: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:53:33.554: INFO: netserver-0 started at 2021-10-30 03:53:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.554: INFO: test-container-pod started at 2021-10-30 03:53:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.554: INFO: service-proxy-disabled-dl9rv started at 2021-10-30 03:51:53 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:53:33.554: INFO: e2e-net-exec started at 2021-10-30 03:52:54 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container e2e-net-exec ready: true, restart count 0
Oct 30 03:53:33.554: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 03:53:33.554: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.554: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:33.554: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 03:53:33.554: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 03:53:33.554: INFO: 	Container grafana ready: true, restart count 0
Oct 30 03:53:33.554: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 03:53:33.554: INFO: service-headless-toggled-vq4td started at 2021-10-30 03:52:18 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container service-headless-toggled ready: false, restart count 0
Oct 30 03:53:33.554: INFO: pod-client started at 2021-10-30 03:53:07 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container pod-client ready: true, restart count 0
Oct 30 03:53:33.554: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:53:33.554: INFO: service-headless-g66pl started at 2021-10-30 03:52:06 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container service-headless ready: true, restart count 0
Oct 30 03:53:33.554: INFO: service-headless-toggled-qhh25 started at 2021-10-30 03:52:18 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container service-headless-toggled ready: true, restart count 0
Oct 30 03:53:33.554: INFO: netserver-0 started at 2021-10-30 03:53:21 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:33.554: INFO: netserver-0 started at 2021-10-30 03:53:06 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.554: INFO: 	Container webserver ready: true, restart count 0
W1030 03:53:33.567419      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:33.908: INFO: 
Latency metrics for node node1
Oct 30 03:53:33.908: INFO: 
Logging node info for node node2
Oct 30 03:53:33.911: INFO: Node Info: &Node{ObjectMeta:{node2    3b49ad19-ba56-4f4a-b1fa-eef102063de9 145620 0 2021-10-29 21:07:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-10-30 01:59:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:26 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:26 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:26 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:26 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:33.911: INFO: 
Logging kubelet events for node node2
Oct 30 03:53:33.914: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 30 03:53:33.933: INFO: service-proxy-disabled-hpn52 started at 2021-10-30 03:51:53 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:53:33.933: INFO: service-proxy-disabled-4kx8d started at 2021-10-30 03:51:53 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:53:33.933: INFO: service-headless-toggled-7vkvw started at 2021-10-30 03:52:18 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container service-headless-toggled ready: true, restart count 0
Oct 30 03:53:33.933: INFO: netserver-1 started at 2021-10-30 03:53:21 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:33.933: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:53:33.933: INFO: e2e-net-server started at 2021-10-30 03:53:06 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container e2e-net-server ready: true, restart count 0
Oct 30 03:53:33.933: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 30 03:53:33.933: INFO: netserver-1 started at 2021-10-30 03:53:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.933: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:33.933: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 03:53:33.933: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:33.933: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:53:33.933: INFO: 	Container init ready: false, restart count 0
Oct 30 03:53:33.933: INFO: 	Container install ready: false, restart count 0
Oct 30 03:53:33.933: INFO: up-down-1-6425n started at 2021-10-30 03:53:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container up-down-1 ready: true, restart count 0
Oct 30 03:53:33.933: INFO: netserver-1 started at 2021-10-30 03:51:42 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.933: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:53:33.933: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 03:53:33.933: INFO: test-container-pod started at 2021-10-30 03:52:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.933: INFO: netserver-1 started at 2021-10-30 03:53:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:33.933: INFO: test-container-pod started at 2021-10-30 03:53:26 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:33.933: INFO: pod-server-1 started at 2021-10-30 03:53:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:53:33.933: INFO: netserver-1 started at 2021-10-30 03:53:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:33.933: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:53:33.933: INFO: execpod2s5ls started at 2021-10-30 03:51:36 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:53:33.933: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 03:53:33.933: INFO: up-down-1-vrthx started at 2021-10-30 03:53:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container up-down-1 ready: true, restart count 0
Oct 30 03:53:33.933: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:53:33.933: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:53:33.933: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.933: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:33.933: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:53:33.933: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:53:33.933: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:53:33.933: INFO: service-proxy-toggled-cj5j7 started at 2021-10-30 03:52:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:53:33.933: INFO: up-down-1-zk986 started at 2021-10-30 03:53:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container up-down-1 ready: true, restart count 0
Oct 30 03:53:33.933: INFO: hostexec started at 2021-10-30 03:53:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:53:33.933: INFO: service-proxy-toggled-trxd7 started at 2021-10-30 03:52:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:53:33.933: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:33.933: INFO: nodeport-update-service-rc5hp started at 2021-10-30 03:51:24 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:33.933: INFO: 	Container nodeport-update-service ready: true, restart count 0
W1030 03:53:33.944592      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:34.726: INFO: 
Latency metrics for node node2
Oct 30 03:53:34.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3511" for this suite.


• Failure [112.268 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Oct 30 03:53:33.181: failed dialing endpoint, did not find expected responses... 
    Tries 34
    Command curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1'
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
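
    Note on the failure above: the framework's endpoint-dialing probe curls the test pod's agnhost /dial endpoint, which fans the request out over UDP to the target host/port and reports back the hostnames of whichever backends answered; the check passes once every expected netserver hostname has been seen within the allotted tries. A minimal standalone sketch of that polling loop follows. It assumes the /dial handler returns a JSON object with a "responses" array (the field name is an assumption, not taken from this log), and the URL and try count are copied from the failure message purely as example values.

    // Hypothetical re-creation of the probe, not the framework's own code.
    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // dialResponse mirrors the JSON shape the agnhost /dial handler is assumed to return.
    type dialResponse struct {
        Responses []string `json:"responses"`
        Errors    []string `json:"errors,omitempty"`
    }

    // collectHostnames repeatedly GETs the /dial URL and accumulates the backend
    // hostnames it reports until `want` distinct names are seen or tries run out.
    func collectHostnames(dialURL string, want, maxTries int) (map[string]struct{}, error) {
        seen := map[string]struct{}{}
        for i := 0; i < maxTries; i++ {
            resp, err := http.Get(dialURL)
            if err != nil {
                time.Sleep(2 * time.Second)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            var dr dialResponse
            if err := json.Unmarshal(body, &dr); err == nil {
                for _, h := range dr.Responses {
                    seen[h] = struct{}{}
                }
            }
            if len(seen) >= want {
                return seen, nil
            }
            time.Sleep(2 * time.Second)
        }
        return seen, fmt.Errorf("did not find expected responses after %d tries, got %v", maxTries, seen)
    }

    func main() {
        // URL copied from the failed curl command above, purely as an example value.
        url := "http://10.244.4.250:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=32181&tries=1"
        got, err := collectHostnames(url, 2, 34) // 2 expected netserver hostnames, 34 tries as logged
        fmt.Println(got, err)
    }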
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:04.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-4119
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:53:04.875: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:04.907: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:06.911: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:08.911: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:10.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:12.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:14.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:16.912: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:18.912: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:20.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:22.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:24.913: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:26.912: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:53:26.915: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:34.952: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
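The MaxTries value of 34 is derived from the endpoint count; with 2 endpoints the logged number is consistent with a rule of the form tries = endpoints*endpoints + 30, though that relation is inferred from this single data point rather than stated anywhere in the log. A toy sketch under that assumption:

    // Assumed relation only: maxTries = endpointCount*endpointCount + 30 (2*2 + 30 = 34).
    package main

    import "fmt"

    func maxTries(endpointCount int) int {
        return endpointCount*endpointCount + 30
    }

    func main() {
        fmt.Println(maxTries(2)) // prints 34, matching the log line above
    }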
STEP: Getting node addresses
Oct 30 03:53:34.952: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:34.959: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:34.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4119" for this suite.


S [SKIPPING] [30.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 30 03:53:34.970: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:51:24.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1030 03:51:24.893745      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:51:24.893: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:51:24.895: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
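The two lines above show how the framework decides whether PodSecurityPolicy is actually enforced: it attempts a server-side dry-run pod creation, and if an admission webhook rejects the dry run (here the cmk.intel.com webhook, which does not support dry run), it falls back to assuming PSP is disabled. A rough client-go sketch of that probe, with made-up pod and namespace names:

    // Sketch only; not the framework's implementation.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Throwaway pod created only as a server-side dry run; nothing is persisted.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "psp-dryrun-"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
            },
        }
        _, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod,
            metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
        if err != nil {
            fmt.Println("dry-run pod creation failed; assuming PodSecurityPolicy is disabled:", err)
            return
        }
        fmt.Println("dry-run pod creation succeeded; admission (including PSP, if enabled) permits it")
    }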
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-61
Oct 30 03:51:24.903: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
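The step above mutates the existing ClusterIP service in place rather than recreating it: switching Spec.Type to NodePort makes the API server allocate a node port (the 30926 seen later in this log) for the existing port 80. A hypothetical client-go helper illustrating that update; the package and function names are invented for illustration.

    package nodeportexample

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // switchToNodePort flips an existing ClusterIP service to type NodePort and
    // reports the node ports the API server allocated for its existing ports.
    func switchToNodePort(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        svc.Spec.Type = corev1.ServiceTypeNodePort
        svc, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
        if err != nil {
            return err
        }
        for _, p := range svc.Spec.Ports {
            fmt.Printf("port %d/%s now exposed on NodePort %d\n", p.Port, p.Protocol, p.NodePort)
        }
        return nil
    }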
STEP: creating replication controller nodeport-update-service in namespace services-61
I1030 03:51:24.916390      33 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-61, replica count: 2
I1030 03:51:27.968329      33 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:51:30.969289      33 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:51:33.970706      33 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:51:36.973658      33 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
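The steps above create nodeport-update-service as a ClusterIP service on TCP port 80, switch it to type=NodePort, and back it with a two-replica replication controller. A rough manual equivalent can be sketched with plain kubectl commands; this is only an illustration, and the selector and exact manifest the suite builds are not taken from the test source:

  # Rough sketch only; the real test builds the Service and RC through the e2e framework.
  kubectl --kubeconfig=/root/.kube/config -n services-61 create service clusterip nodeport-update-service --tcp=80:80
  # Flip the existing service to NodePort; the apiserver allocates a node port (30926 in this run).
  kubectl --kubeconfig=/root/.kube/config -n services-61 patch service nodeport-update-service -p '{"spec":{"type":"NodePort"}}'
  # Read back the allocated node port.
  kubectl --kubeconfig=/root/.kube/config -n services-61 get service nodeport-update-service -o jsonpath='{.spec.ports[0].nodePort}'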
Oct 30 03:51:36.973: INFO: Creating new exec pod
Oct 30 03:51:42.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Oct 30 03:51:42.312: INFO: stderr: "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Oct 30 03:51:42.312: INFO: stdout: "nodeport-update-service-rc5hp"
Oct 30 03:51:42.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.15.18 80'
Oct 30 03:51:42.546: INFO: stderr: "+ nc -v -t -w 2 10.233.15.18 80\n+ echo hostName\nConnection to 10.233.15.18 80 port [tcp/http] succeeded!\n"
Oct 30 03:51:42.546: INFO: stdout: "nodeport-update-service-cwmvf"
Oct 30 03:51:42.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:42.980: INFO: rc: 1
Oct 30 03:51:42.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
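From here the probe against the node address 10.10.190.207 on the allocated NodePort 30926 keeps being refused and the suite retries roughly once per second. While those retries run, the usual things to confirm by hand are that the service still has ready endpoints, which node port was actually allocated, and whether kube-proxy has picked up the update; a minimal sketch follows (the k8s-app=kube-proxy label selector is an assumption about this deployment, not something taken from the log):

  # Hypothetical triage while the NodePort probe keeps returning "Connection refused".
  kubectl --kubeconfig=/root/.kube/config -n services-61 get endpoints nodeport-update-service
  kubectl --kubeconfig=/root/.kube/config -n services-61 describe service nodeport-update-service
  # kube-proxy programs the node port; its recent logs can show whether the service update was seen.
  kubectl --kubeconfig=/root/.kube/config -n kube-system logs -l k8s-app=kube-proxy --tail=50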
Oct 30 03:51:43.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:44.289: INFO: rc: 1
Oct 30 03:51:44.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:44.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:45.225: INFO: rc: 1
Oct 30 03:51:45.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:45.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:46.235: INFO: rc: 1
Oct 30 03:51:46.235: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:46.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:47.668: INFO: rc: 1
Oct 30 03:51:47.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:47.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:48.344: INFO: rc: 1
Oct 30 03:51:48.344: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:48.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:49.672: INFO: rc: 1
Oct 30 03:51:49.672: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:49.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:50.262: INFO: rc: 1
Oct 30 03:51:50.262: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30926
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:50.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:51.500: INFO: rc: 1
Oct 30 03:51:51.500: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:51.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:52.212: INFO: rc: 1
Oct 30 03:51:52.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:52.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:53.221: INFO: rc: 1
Oct 30 03:51:53.221: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:53.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:54.256: INFO: rc: 1
Oct 30 03:51:54.256: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:54.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:55.438: INFO: rc: 1
Oct 30 03:51:55.438: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:55.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:56.298: INFO: rc: 1
Oct 30 03:51:56.298: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:56.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:57.838: INFO: rc: 1
Oct 30 03:51:57.838: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:57.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:58.470: INFO: rc: 1
Oct 30 03:51:58.470: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:58.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:51:59.414: INFO: rc: 1
Oct 30 03:51:59.414: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:51:59.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:00.347: INFO: rc: 1
Oct 30 03:52:00.347: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:00.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:01.923: INFO: rc: 1
Oct 30 03:52:01.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:01.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:02.826: INFO: rc: 1
Oct 30 03:52:02.826: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:02.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:03.331: INFO: rc: 1
Oct 30 03:52:03.331: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:03.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:04.244: INFO: rc: 1
Oct 30 03:52:04.244: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:04.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:05.385: INFO: rc: 1
Oct 30 03:52:05.385: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:05.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:06.384: INFO: rc: 1
Oct 30 03:52:06.384: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:06.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:07.671: INFO: rc: 1
Oct 30 03:52:07.672: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:07.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:08.484: INFO: rc: 1
Oct 30 03:52:08.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:08.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:09.439: INFO: rc: 1
Oct 30 03:52:09.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:09.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:10.214: INFO: rc: 1
Oct 30 03:52:10.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:10.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:12.140: INFO: rc: 1
Oct 30 03:52:12.140: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:12.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:13.442: INFO: rc: 1
Oct 30 03:52:13.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:13.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:14.648: INFO: rc: 1
Oct 30 03:52:14.648: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:14.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:15.249: INFO: rc: 1
Oct 30 03:52:15.249: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:15.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:16.224: INFO: rc: 1
Oct 30 03:52:16.224: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:16.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:17.221: INFO: rc: 1
Oct 30 03:52:17.221: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:17.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:18.218: INFO: rc: 1
Oct 30 03:52:18.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:18.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:19.447: INFO: rc: 1
Oct 30 03:52:19.447: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:19.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:20.324: INFO: rc: 1
Oct 30 03:52:20.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:20.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:21.300: INFO: rc: 1
Oct 30 03:52:21.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:21.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:22.361: INFO: rc: 1
Oct 30 03:52:22.361: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:22.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:23.382: INFO: rc: 1
Oct 30 03:52:23.382: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:23.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:24.390: INFO: rc: 1
Oct 30 03:52:24.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:24.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:25.202: INFO: rc: 1
Oct 30 03:52:25.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:25.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:26.243: INFO: rc: 1
Oct 30 03:52:26.243: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:26.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:27.309: INFO: rc: 1
Oct 30 03:52:27.309: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:27.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:28.267: INFO: rc: 1
Oct 30 03:52:28.267: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:28.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:29.328: INFO: rc: 1
Oct 30 03:52:29.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:29.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:30.264: INFO: rc: 1
Oct 30 03:52:30.264: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:30.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:31.233: INFO: rc: 1
Oct 30 03:52:31.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:31.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:32.229: INFO: rc: 1
Oct 30 03:52:32.229: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:32.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:33.246: INFO: rc: 1
Oct 30 03:52:33.247: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:33.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:34.558: INFO: rc: 1
Oct 30 03:52:34.558: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:34.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:35.403: INFO: rc: 1
Oct 30 03:52:35.403: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:35.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:36.683: INFO: rc: 1
Oct 30 03:52:36.683: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:36.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:37.234: INFO: rc: 1
Oct 30 03:52:37.234: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:37.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:38.230: INFO: rc: 1
Oct 30 03:52:38.230: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:38.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:39.268: INFO: rc: 1
Oct 30 03:52:39.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:39.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:40.279: INFO: rc: 1
Oct 30 03:52:40.279: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:40.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:42.349: INFO: rc: 1
Oct 30 03:52:42.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:42.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:43.222: INFO: rc: 1
Oct 30 03:52:43.222: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:43.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:44.328: INFO: rc: 1
Oct 30 03:52:44.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:44.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:45.255: INFO: rc: 1
Oct 30 03:52:45.255: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:45.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:46.215: INFO: rc: 1
Oct 30 03:52:46.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:46.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:47.281: INFO: rc: 1
Oct 30 03:52:47.282: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:47.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:48.271: INFO: rc: 1
Oct 30 03:52:48.271: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:48.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:49.298: INFO: rc: 1
Oct 30 03:52:49.298: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:49.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:50.397: INFO: rc: 1
Oct 30 03:52:50.397: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:50.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:51.224: INFO: rc: 1
Oct 30 03:52:51.224: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:51.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:52.246: INFO: rc: 1
Oct 30 03:52:52.246: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:52.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:53.270: INFO: rc: 1
Oct 30 03:52:53.270: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:53.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:54.396: INFO: rc: 1
Oct 30 03:52:54.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:54.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:55.282: INFO: rc: 1
Oct 30 03:52:55.282: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:55.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:56.251: INFO: rc: 1
Oct 30 03:52:56.251: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:56.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:57.233: INFO: rc: 1
Oct 30 03:52:57.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:57.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:58.214: INFO: rc: 1
Oct 30 03:52:58.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:52:58.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:52:59.480: INFO: rc: 1
Oct 30 03:52:59.480: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[Identical retry blocks elided: the same exec/nc probe was repeated roughly once per second from 03:52:59 through 03:53:43, and every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused". Only the final attempt is reproduced below.]
Oct 30 03:53:43.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926'
Oct 30 03:53:43.935: INFO: rc: 1
Oct 30 03:53:43.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-61 exec execpod2s5ls -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30926:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30926
+ echo hostName
nc: connect to 10.10.190.207 port 30926 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:53:43.936: FAIL: Unexpected error:
    <*errors.errorString | 0xc00464c8f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30926 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30926 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e80780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001e80780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001e80780, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
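(A manual triage sketch, assuming the services-61 namespace, the execpod2s5ls client pod, and the nodeport-update-service Service created by this test still exist:

    # Confirm the Service still exposes NodePort 30926 and has ready backends
    kubectl --kubeconfig=/root/.kube/config -n services-61 get svc nodeport-update-service -o wide
    kubectl --kubeconfig=/root/.kube/config -n services-61 get endpoints nodeport-update-service
    # Re-run the exact probe the test was using
    kubectl --kubeconfig=/root/.kube/config -n services-61 exec execpod2s5ls -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 30926'

If the endpoints are populated but the NodePort still refuses connections, the problem more likely lies in NodePort programming on the node than in the backend pods.)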
Oct 30 03:53:43.937: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-61".
STEP: Found 17 events.
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:24 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-cwmvf
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:24 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-rc5hp
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:24 +0000 UTC - event for nodeport-update-service-cwmvf: {default-scheduler } Scheduled: Successfully assigned services-61/nodeport-update-service-cwmvf to node1
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:24 +0000 UTC - event for nodeport-update-service-rc5hp: {default-scheduler } Scheduled: Successfully assigned services-61/nodeport-update-service-rc5hp to node2
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:30 +0000 UTC - event for nodeport-update-service-cwmvf: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:31 +0000 UTC - event for nodeport-update-service-rc5hp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:53:43.963: INFO: At 2021-10-30 03:51:32 +0000 UTC - event for nodeport-update-service-rc5hp: {kubelet node2} Created: Created container nodeport-update-service
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:32 +0000 UTC - event for nodeport-update-service-rc5hp: {kubelet node2} Started: Started container nodeport-update-service
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:32 +0000 UTC - event for nodeport-update-service-rc5hp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.093792356s
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:33 +0000 UTC - event for nodeport-update-service-cwmvf: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 3.22062565s
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:33 +0000 UTC - event for nodeport-update-service-cwmvf: {kubelet node1} Started: Started container nodeport-update-service
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:33 +0000 UTC - event for nodeport-update-service-cwmvf: {kubelet node1} Created: Created container nodeport-update-service
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:36 +0000 UTC - event for execpod2s5ls: {default-scheduler } Scheduled: Successfully assigned services-61/execpod2s5ls to node2
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:38 +0000 UTC - event for execpod2s5ls: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:39 +0000 UTC - event for execpod2s5ls: {kubelet node2} Created: Created container agnhost-container
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:39 +0000 UTC - event for execpod2s5ls: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 406.463828ms
Oct 30 03:53:43.964: INFO: At 2021-10-30 03:51:40 +0000 UTC - event for execpod2s5ls: {kubelet node2} Started: Started container agnhost-container
Oct 30 03:53:43.966: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Oct 30 03:53:43.967: INFO: execpod2s5ls                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:36 +0000 UTC  }]
Oct 30 03:53:43.967: INFO: nodeport-update-service-cwmvf  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:24 +0000 UTC  }]
Oct 30 03:53:43.967: INFO: nodeport-update-service-rc5hp  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:51:24 +0000 UTC  }]
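(All three pods above report Running and Ready, so the refused connections more likely point at the Service's NodePort wiring than at the backends. A follow-up sketch, assuming the Service still exists and that kube-proxy carries the kubeadm default k8s-app=kube-proxy label:

    # Check the NodePort (30926), selector, and endpoints recorded on the Service
    kubectl --kubeconfig=/root/.kube/config -n services-61 describe svc nodeport-update-service
    # Find the kube-proxy instance running on the node that was probed (10.10.190.207)
    kubectl --kubeconfig=/root/.kube/config -n kube-system get pods -o wide -l k8s-app=kube-proxy

Its logs can then be inspected for errors around programming port 30926.)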
Oct 30 03:53:43.967: INFO: 
Oct 30 03:53:43.971: INFO: 
Logging node info for node master1
Oct 30 03:53:43.973: INFO: Node Info: &Node{ObjectMeta:{master1    b47c04d5-47a7-4a95-8e97-481e6e60af54 146101 0 2021-10-29 21:05:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:43.974: INFO: 
Logging kubelet events for node master1
Oct 30 03:53:43.976: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 30 03:53:43.999: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:43.999: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:53:43.999: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container docker-registry ready: true, restart count 0
Oct 30 03:53:43.999: INFO: 	Container nginx ready: true, restart count 0
Oct 30 03:53:43.999: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:43.999: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:43.999: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container kube-scheduler ready: true, restart count 0
Oct 30 03:53:43.999: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 30 03:53:43.999: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Init container install-cni ready: true, restart count 0
Oct 30 03:53:43.999: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:53:43.999: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:43.999: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:43.999: INFO: 	Container kube-apiserver ready: true, restart count 0
W1030 03:53:44.012632      33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:44.080: INFO: 
Latency metrics for node master1
Oct 30 03:53:44.080: INFO: 
Logging node info for node master2
Oct 30 03:53:44.082: INFO: Node Info: &Node{ObjectMeta:{master2    208792d3-d365-4ddb-83d4-10e6e818079c 145921 0 2021-10-29 21:06:06 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:44.083: INFO: 
Logging kubelet events for node master2
Oct 30 03:53:44.085: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 30 03:53:44.092: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:44.092: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:44.092: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:44.092: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:53:44.092: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Container kube-controller-manager ready: true, restart count 3
Oct 30 03:53:44.092: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:53:44.092: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 30 03:53:44.092: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:44.092: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:44.092: INFO: 	Container kube-flannel ready: true, restart count 1
W1030 03:53:44.106707      33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:44.161: INFO: 
Latency metrics for node master2
Oct 30 03:53:44.161: INFO: 
Logging node info for node master3
Oct 30 03:53:44.163: INFO: Node Info: &Node{ObjectMeta:{master3    168f1589-e029-47ae-b194-10215fc22d6a 145915 0 2021-10-29 21:06:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:35 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:44.164: INFO: 
Logging kubelet events for node master3
Oct 30 03:53:44.165: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 30 03:53:44.175: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:53:44.175: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:53:44.175: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container autoscaler ready: true, restart count 1
Oct 30 03:53:44.175: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 30 03:53:44.175: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 30 03:53:44.175: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:44.175: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:44.175: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:53:44.175: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:44.175: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:53:44.175: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:44.175: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 30 03:53:44.175: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:44.175: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:44.175: INFO: 	Container node-exporter ready: true, restart count 0
W1030 03:53:44.190165      33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:44.270: INFO: 
Latency metrics for node master3
Oct 30 03:53:44.270: INFO: 
Logging node info for node node1
Oct 30 03:53:44.272: INFO: Node Info: &Node{ObjectMeta:{node1    ddef9269-94c5-4165-81fb-a3b0c4ac5c75 146093 0 2021-10-29 21:07:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 03:08:22 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:39 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:44.273: INFO: 
Logging kubelet events for node node1
Oct 30 03:53:44.275: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 30 03:53:44.293: INFO: netserver-0 started at 2021-10-30 03:53:06 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:44.293: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:53:44.293: INFO: netserver-0 started at 2021-10-30 03:53:21 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:44.293: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:44.293: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:53:44.293: INFO: up-down-2-gd9xv started at 2021-10-30 03:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:53:44.293: INFO: verify-service-up-host-exec-pod started at 2021-10-30 03:53:40 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container agnhost-container ready: false, restart count 0
Oct 30 03:53:44.293: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:44.293: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:53:44.293: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:53:44.293: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:53:44.293: INFO: 	Container init ready: false, restart count 0
Oct 30 03:53:44.293: INFO: 	Container install ready: false, restart count 0
Oct 30 03:53:44.293: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:53:44.293: INFO: netserver-0 started at 2021-10-30 03:53:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:44.293: INFO: nodeport-update-service-cwmvf started at 2021-10-30 03:51:24 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 30 03:53:44.293: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:44.293: INFO: netserver-0 started at 2021-10-30 03:51:42 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:44.293: INFO: service-proxy-toggled-jbjrr started at 2021-10-30 03:52:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container service-proxy-toggled ready: false, restart count 0
Oct 30 03:53:44.293: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:53:44.293: INFO: test-container-pod started at 2021-10-30 03:53:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:44.293: INFO: e2e-net-exec started at 2021-10-30 03:52:54 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container e2e-net-exec ready: true, restart count 0
Oct 30 03:53:44.293: INFO: pod-client started at 2021-10-30 03:53:07 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container pod-client ready: true, restart count 0
Oct 30 03:53:44.293: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 03:53:44.293: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:44.293: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container grafana ready: true, restart count 0
Oct 30 03:53:44.293: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 03:53:44.293: INFO: up-down-2-5cz4q started at 2021-10-30 03:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:44.293: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:53:44.293: INFO: test-container-pod started at  (0+0 container statuses recorded)
W1030 03:53:44.307943      33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:45.009: INFO: 
Latency metrics for node node1
Oct 30 03:53:45.009: INFO: 
Logging node info for node node2
Oct 30 03:53:45.012: INFO: Node Info: &Node{ObjectMeta:{node2    3b49ad19-ba56-4f4a-b1fa-eef102063de9 145928 0 2021-10-29 21:07:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-10-30 01:59:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:36 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:36 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:53:36 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:53:36 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:53:45.013: INFO: 
Logging kubelet events for node node2
Oct 30 03:53:45.015: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 30 03:53:45.032: INFO: netserver-1 started at 2021-10-30 03:51:42 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:45.032: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:53:45.032: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 03:53:45.032: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:53:45.032: INFO: 	Container init ready: false, restart count 0
Oct 30 03:53:45.032: INFO: 	Container install ready: false, restart count 0
Oct 30 03:53:45.032: INFO: up-down-1-6425n started at 2021-10-30 03:53:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container up-down-1 ready: true, restart count 0
Oct 30 03:53:45.032: INFO: host-test-container-pod started at 2021-10-30 03:53:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container agnhost-container ready: false, restart count 0
Oct 30 03:53:45.032: INFO: pod-server-1 started at 2021-10-30 03:53:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:53:45.032: INFO: netserver-1 started at 2021-10-30 03:53:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:45.032: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:53:45.032: INFO: execpod2s5ls started at 2021-10-30 03:51:36 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:53:45.032: INFO: test-container-pod started at 2021-10-30 03:52:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:45.032: INFO: netserver-1 started at 2021-10-30 03:53:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:45.032: INFO: host-test-container-pod started at 2021-10-30 03:53:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container agnhost-container ready: false, restart count 0
Oct 30 03:53:45.032: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:53:45.032: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:53:45.032: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:53:45.032: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:53:45.032: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 03:53:45.032: INFO: up-down-1-vrthx started at 2021-10-30 03:53:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container up-down-1 ready: true, restart count 0
Oct 30 03:53:45.032: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:53:45.032: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:53:45.032: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:53:45.032: INFO: service-proxy-toggled-cj5j7 started at 2021-10-30 03:52:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container service-proxy-toggled ready: false, restart count 0
Oct 30 03:53:45.032: INFO: up-down-2-tgwxx started at 2021-10-30 03:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:53:45.032: INFO: service-proxy-toggled-trxd7 started at 2021-10-30 03:52:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container service-proxy-toggled ready: false, restart count 0
Oct 30 03:53:45.032: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:53:45.032: INFO: nodeport-update-service-rc5hp started at 2021-10-30 03:51:24 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 30 03:53:45.032: INFO: up-down-1-zk986 started at 2021-10-30 03:53:22 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container up-down-1 ready: true, restart count 0
Oct 30 03:53:45.032: INFO: service-proxy-disabled-hpn52 started at 2021-10-30 03:51:53 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container service-proxy-disabled ready: false, restart count 0
Oct 30 03:53:45.032: INFO: test-container-pod started at 2021-10-30 03:53:42 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container webserver ready: false, restart count 0
Oct 30 03:53:45.032: INFO: netserver-1 started at 2021-10-30 03:53:21 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:53:45.032: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:53:45.032: INFO: service-proxy-disabled-4kx8d started at 2021-10-30 03:51:53 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:53:45.032: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:53:45.032: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 03:53:45.032: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.032: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:53:45.032: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:53:45.033: INFO: 	Container cmk-webhook ready: true, restart count 0
W1030 03:53:45.044917      33 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:53:46.214: INFO: 
Latency metrics for node node2
Oct 30 03:53:46.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-61" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [141.351 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Oct 30 03:53:43.936: Unexpected error:
      <*errors.errorString | 0xc00464c8f0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30926 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30926 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":0,"skipped":170,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Oct 30 03:53:46.235: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:20.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-8534
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:53:20.937: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:20.969: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:22.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:24.973: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:26.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:28.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:30.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:32.973: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:34.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:36.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:38.973: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:40.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:42.973: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:53:42.978: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:49.018: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:53:49.018: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:49.025: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:49.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8534" for this suite.


S [SKIPPING] [28.220 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
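
The spec above skips before it ever queries the kube-proxy URLs it is named after. For context, a minimal sketch of a direct health check against kube-proxy on each node; the default healthz port 10256 is an assumption here, and the real test probes from a pod via the framework rather than with a bare HTTP client like this:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkKubeProxyHealthz issues a GET against kube-proxy's healthz
    // endpoint on one node and prints the status line and body.
    func checkKubeProxyHealthz(nodeIP string) error {
        client := &http.Client{Timeout: 5 * time.Second}
        // 10256 is kube-proxy's default --healthz-port; adjust if the
        // cluster overrides it.
        url := fmt.Sprintf("http://%s:10256/healthz", nodeIP)
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s -> %s: %s\n", url, resp.Status, string(body))
        return nil
    }

    func main() {
        // Worker node IPs seen elsewhere in this log; illustrative usage.
        for _, ip := range []string{"10.10.190.207", "10.10.190.208"} {
            if err := checkKubeProxyHealthz(ip); err != nil {
                fmt.Println("probe failed:", err)
            }
        }
    }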
Oct 30 03:53:49.037: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:21.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-92
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:53:21.369: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:21.401: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:23.405: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:25.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:27.406: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:29.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:31.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:33.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:35.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:37.406: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:39.406: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:41.407: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:53:43.405: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:53:43.409: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:53:49.443: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:53:49.443: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:53:49.451: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:53:49.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-92" for this suite.


S [SKIPPING] [28.207 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
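
Both Networking specs in this stretch of the log skip with "Requires at least 2 nodes (not -1)"; the -1 suggests the framework's node selection came back empty on this local provider rather than the cluster actually having fewer than two workers. A minimal client-go sketch of the schedulable-and-ready node count the requirement refers to (the kubeconfig path is taken from the log; the helper name is illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // countSchedulableReadyNodes counts nodes that are not cordoned and
    // report Ready=True, the property the skipped specs need at least
    // two of.
    func countSchedulableReadyNodes(cs kubernetes.Interface) (int, error) {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return 0, err
        }
        count := 0
        for _, n := range nodes.Items {
            if n.Spec.Unschedulable {
                continue
            }
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    count++
                    break
                }
            }
        }
        return count, nil
    }

    func main() {
        // Kubeconfig path taken from the ">>> kubeConfig" lines in this log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        n, err := countSchedulableReadyNodes(kubernetes.NewForConfigOrDie(cfg))
        if err != nil {
            panic(err)
        }
        fmt.Printf("schedulable, ready nodes: %d\n", n)
    }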
Oct 30 03:53:49.463: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:07.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-1111
STEP: creating a client pod for probing the service svc-udp
Oct 30 03:53:07.320: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:09.323: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:11.324: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:13.325: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:15.324: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:17.325: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:19.324: INFO: The status of Pod pod-client is Running (Ready = true)
Oct 30 03:53:19.419: INFO: Pod client logs: Sat Oct 30 03:53:16 UTC 2021
Sat Oct 30 03:53:16 UTC 2021 Try: 1

Sat Oct 30 03:53:16 UTC 2021 Try: 2

Sat Oct 30 03:53:16 UTC 2021 Try: 3

Sat Oct 30 03:53:16 UTC 2021 Try: 4

Sat Oct 30 03:53:16 UTC 2021 Try: 5

Sat Oct 30 03:53:16 UTC 2021 Try: 6

Sat Oct 30 03:53:16 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Oct 30 03:53:19.434: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:21.437: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:23.437: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:25.440: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:27.438: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:29.437: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-1111 to expose endpoints map[pod-server-1:[80]]
Oct 30 03:53:29.448: INFO: successfully validated that service svc-udp in namespace conntrack-1111 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Oct 30 03:54:29.473: INFO: Pod client logs: Sat Oct 30 03:53:16 UTC 2021
Sat Oct 30 03:53:16 UTC 2021 Try: 1

Sat Oct 30 03:53:16 UTC 2021 Try: 2

Sat Oct 30 03:53:16 UTC 2021 Try: 3

Sat Oct 30 03:53:16 UTC 2021 Try: 4

Sat Oct 30 03:53:16 UTC 2021 Try: 5

Sat Oct 30 03:53:16 UTC 2021 Try: 6

Sat Oct 30 03:53:16 UTC 2021 Try: 7

Sat Oct 30 03:53:21 UTC 2021 Try: 8

Sat Oct 30 03:53:21 UTC 2021 Try: 9

Sat Oct 30 03:53:21 UTC 2021 Try: 10

Sat Oct 30 03:53:21 UTC 2021 Try: 11

Sat Oct 30 03:53:21 UTC 2021 Try: 12

Sat Oct 30 03:53:21 UTC 2021 Try: 13

Sat Oct 30 03:53:26 UTC 2021 Try: 14

Sat Oct 30 03:53:26 UTC 2021 Try: 15

Sat Oct 30 03:53:26 UTC 2021 Try: 16

Sat Oct 30 03:53:26 UTC 2021 Try: 17

Sat Oct 30 03:53:26 UTC 2021 Try: 18

Sat Oct 30 03:53:26 UTC 2021 Try: 19

Sat Oct 30 03:53:31 UTC 2021 Try: 20

Sat Oct 30 03:53:31 UTC 2021 Try: 21

Sat Oct 30 03:53:31 UTC 2021 Try: 22

Sat Oct 30 03:53:31 UTC 2021 Try: 23

Sat Oct 30 03:53:31 UTC 2021 Try: 24

Sat Oct 30 03:53:31 UTC 2021 Try: 25

Sat Oct 30 03:53:36 UTC 2021 Try: 26

Sat Oct 30 03:53:36 UTC 2021 Try: 27

Sat Oct 30 03:53:36 UTC 2021 Try: 28

Sat Oct 30 03:53:36 UTC 2021 Try: 29

Sat Oct 30 03:53:36 UTC 2021 Try: 30

Sat Oct 30 03:53:36 UTC 2021 Try: 31

Sat Oct 30 03:53:41 UTC 2021 Try: 32

Sat Oct 30 03:53:41 UTC 2021 Try: 33

Sat Oct 30 03:53:41 UTC 2021 Try: 34

Sat Oct 30 03:53:41 UTC 2021 Try: 35

Sat Oct 30 03:53:41 UTC 2021 Try: 36

Sat Oct 30 03:53:41 UTC 2021 Try: 37

Sat Oct 30 03:53:46 UTC 2021 Try: 38

Sat Oct 30 03:53:46 UTC 2021 Try: 39

Sat Oct 30 03:53:46 UTC 2021 Try: 40

Sat Oct 30 03:53:46 UTC 2021 Try: 41

Sat Oct 30 03:53:46 UTC 2021 Try: 42

Sat Oct 30 03:53:46 UTC 2021 Try: 43

Sat Oct 30 03:53:51 UTC 2021 Try: 44

Sat Oct 30 03:53:51 UTC 2021 Try: 45

Sat Oct 30 03:53:51 UTC 2021 Try: 46

Sat Oct 30 03:53:51 UTC 2021 Try: 47

Sat Oct 30 03:53:51 UTC 2021 Try: 48

Sat Oct 30 03:53:51 UTC 2021 Try: 49

Sat Oct 30 03:53:56 UTC 2021 Try: 50

Sat Oct 30 03:53:56 UTC 2021 Try: 51

Sat Oct 30 03:53:56 UTC 2021 Try: 52

Sat Oct 30 03:53:56 UTC 2021 Try: 53

Sat Oct 30 03:53:56 UTC 2021 Try: 54

Sat Oct 30 03:53:56 UTC 2021 Try: 55

Sat Oct 30 03:54:01 UTC 2021 Try: 56

Sat Oct 30 03:54:01 UTC 2021 Try: 57

Sat Oct 30 03:54:01 UTC 2021 Try: 58

Sat Oct 30 03:54:01 UTC 2021 Try: 59

Sat Oct 30 03:54:01 UTC 2021 Try: 60

Sat Oct 30 03:54:01 UTC 2021 Try: 61

Sat Oct 30 03:54:06 UTC 2021 Try: 62

Sat Oct 30 03:54:06 UTC 2021 Try: 63

Sat Oct 30 03:54:06 UTC 2021 Try: 64

Sat Oct 30 03:54:06 UTC 2021 Try: 65

Sat Oct 30 03:54:06 UTC 2021 Try: 66

Sat Oct 30 03:54:06 UTC 2021 Try: 67

Sat Oct 30 03:54:12 UTC 2021 Try: 68

Sat Oct 30 03:54:12 UTC 2021 Try: 69

Sat Oct 30 03:54:12 UTC 2021 Try: 70

Sat Oct 30 03:54:12 UTC 2021 Try: 71

Sat Oct 30 03:54:12 UTC 2021 Try: 72

Sat Oct 30 03:54:12 UTC 2021 Try: 73

Sat Oct 30 03:54:17 UTC 2021 Try: 74

Sat Oct 30 03:54:17 UTC 2021 Try: 75

Sat Oct 30 03:54:17 UTC 2021 Try: 76

Sat Oct 30 03:54:17 UTC 2021 Try: 77

Sat Oct 30 03:54:17 UTC 2021 Try: 78

Sat Oct 30 03:54:17 UTC 2021 Try: 79

Sat Oct 30 03:54:22 UTC 2021 Try: 80

Sat Oct 30 03:54:22 UTC 2021 Try: 81

Sat Oct 30 03:54:22 UTC 2021 Try: 82

Sat Oct 30 03:54:22 UTC 2021 Try: 83

Sat Oct 30 03:54:22 UTC 2021 Try: 84

Sat Oct 30 03:54:22 UTC 2021 Try: 85

Sat Oct 30 03:54:27 UTC 2021 Try: 86

Sat Oct 30 03:54:27 UTC 2021 Try: 87

Sat Oct 30 03:54:27 UTC 2021 Try: 88

Sat Oct 30 03:54:27 UTC 2021 Try: 89

Sat Oct 30 03:54:27 UTC 2021 Try: 90

Sat Oct 30 03:54:27 UTC 2021 Try: 91

Oct 30 03:54:29.474: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000683200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000683200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000683200, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
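
The client pod above only ever logs bare "Try: N" lines and never a reply from pod-server-1, so the connectivity check against backend 1 on node IP 10.10.190.208 times out and the spec fails. A minimal Go sketch of that style of periodic UDP probe against the service's NodePort; the port value and the probe payload are placeholders, since the allocated NodePort is not shown in this part of the log and the real client is agnhost-based:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeUDP sends one datagram to addr and waits briefly for a reply,
    // returning the reply text (for the e2e backend this would identify
    // the serving pod) or an error.
    func probeUDP(addr, payload string) (string, error) {
        conn, err := net.Dial("udp", addr)
        if err != nil {
            return "", err
        }
        defer conn.Close()
        if _, err := conn.Write([]byte(payload)); err != nil {
            return "", err
        }
        if err := conn.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil {
            return "", err
        }
        buf := make([]byte, 1024)
        n, err := conn.Read(buf)
        if err != nil {
            return "", err
        }
        return string(buf[:n]), nil
    }

    func main() {
        // Node IP from the check above; the NodePort value is a placeholder,
        // as the allocated port is not shown in this part of the log.
        addr := "10.10.190.208:30000"
        for try := 1; try <= 30; try++ {
            fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)
            if reply, err := probeUDP(addr, "hostname\n"); err == nil {
                fmt.Println("reply from backend:", reply)
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("no reply from backend within 30 tries")
    }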
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-1111".
STEP: Found 8 events.
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:15 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:16 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 408.468082ms
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:16 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:17 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:20 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:21 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 291.893568ms
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:21 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Oct 30 03:54:29.478: INFO: At 2021-10-30 03:53:22 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Oct 30 03:54:29.480: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Oct 30 03:54:29.480: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:07 +0000 UTC  }]
Oct 30 03:54:29.480: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:53:19 +0000 UTC  }]
Oct 30 03:54:29.480: INFO: 
Oct 30 03:54:29.484: INFO: 
Logging node info for node master1
Oct 30 03:54:29.487: INFO: Node Info: &Node{ObjectMeta:{master1    b47c04d5-47a7-4a95-8e97-481e6e60af54 146542 0 2021-10-29 21:05:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:19 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:19 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:19 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:54:19 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:54:29.487: INFO: 
Logging kubelet events for node master1
Oct 30 03:54:29.489: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 30 03:54:29.508: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container docker-registry ready: true, restart count 0
Oct 30 03:54:29.509: INFO: 	Container nginx ready: true, restart count 0
Oct 30 03:54:29.509: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:54:29.509: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:54:29.509: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container kube-scheduler ready: true, restart count 0
Oct 30 03:54:29.509: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:54:29.509: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:54:29.509: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:54:29.509: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:54:29.509: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 30 03:54:29.509: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:54:29.509: INFO: 	Init container install-cni ready: true, restart count 0
Oct 30 03:54:29.509: INFO: 	Container kube-flannel ready: true, restart count 2
W1030 03:54:29.523782      31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:54:29.593: INFO: 
Latency metrics for node master1
Oct 30 03:54:29.593: INFO: 
Logging node info for node master2
Oct 30 03:54:29.596: INFO: Node Info: &Node{ObjectMeta:{master2    208792d3-d365-4ddb-83d4-10e6e818079c 146579 0 2021-10-29 21:06:06 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:26 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:26 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:26 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:54:26 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:54:29.597: INFO: 
Logging kubelet events for node master2
Oct 30 03:54:29.598: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 30 03:54:29.606: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:54:29.607: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Container kube-controller-manager ready: true, restart count 3
Oct 30 03:54:29.607: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:54:29.607: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 30 03:54:29.607: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:54:29.607: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 30 03:54:29.607: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:54:29.607: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.607: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:54:29.607: INFO: 	Container node-exporter ready: true, restart count 0
W1030 03:54:29.620595      31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:54:29.676: INFO: 
Latency metrics for node master2
Oct 30 03:54:29.676: INFO: 
Logging node info for node master3
Oct 30 03:54:29.680: INFO: Node Info: &Node{ObjectMeta:{master3    168f1589-e029-47ae-b194-10215fc22d6a 146577 0 2021-10-29 21:06:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:25 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:54:25 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:54:29.680: INFO: 
Logging kubelet events for node master3
Oct 30 03:54:29.682: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 30 03:54:29.692: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:54:29.692: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:54:29.692: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container autoscaler ready: true, restart count 1
Oct 30 03:54:29.692: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 30 03:54:29.692: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:54:29.692: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:54:29.692: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 30 03:54:29.692: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:54:29.692: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:54:29.692: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 30 03:54:29.692: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:54:29.692: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:54:29.692: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:54:29.692: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.692: INFO: 	Container kube-multus ready: true, restart count 1
W1030 03:54:29.706099      31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:54:29.784: INFO: 
Latency metrics for node master3
Oct 30 03:54:29.784: INFO: 
Logging node info for node node1
Oct 30 03:54:29.787: INFO: Node Info: &Node{ObjectMeta:{node1    ddef9269-94c5-4165-81fb-a3b0c4ac5c75 146547 0 2021-10-29 21:07:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 03:08:22 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:20 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:20 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:20 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:54:20 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
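The capacity and allocatable figures in the node dump above can be read back directly from the API server rather than from this log. A minimal sketch, using the node name and kubeconfig path that appear in this run; the jsonpath expression is only one way of selecting the same fields:

  # print the allocatable resource list recorded for node1
  kubectl --kubeconfig=/root/.kube/config get node node1 -o jsonpath='{.status.allocatable}'

  # or dump conditions, capacity and images in human-readable form
  kubectl --kubeconfig=/root/.kube/config describe node node1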
Oct 30 03:54:29.788: INFO: 
Logging kubelet events for node node1
Oct 30 03:54:29.789: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 30 03:54:29.803: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:54:29.803: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:54:29.803: INFO: up-down-2-5cz4q started at 2021-10-30 03:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:54:29.803: INFO: pod-client started at 2021-10-30 03:53:07 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container pod-client ready: true, restart count 0
Oct 30 03:54:29.803: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 03:54:29.803: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:54:29.803: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:54:29.803: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 03:54:29.803: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 03:54:29.803: INFO: 	Container grafana ready: true, restart count 0
Oct 30 03:54:29.803: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 03:54:29.803: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.803: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:54:29.803: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:54:29.804: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:54:29.804: INFO: up-down-2-gd9xv started at 2021-10-30 03:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:54:29.804: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:54:29.804: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:54:29.804: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:54:29.804: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:54:29.804: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:54:29.804: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:54:29.804: INFO: 	Container init ready: false, restart count 0
Oct 30 03:54:29.804: INFO: 	Container install ready: false, restart count 0
Oct 30 03:54:29.804: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:29.804: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:54:29.804: INFO: 	Container reconcile ready: true, restart count 0
W1030 03:54:29.819510      31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:54:30.094: INFO: 
Latency metrics for node node1
Oct 30 03:54:30.094: INFO: 
Logging node info for node node2
Oct 30 03:54:30.098: INFO: Node Info: &Node{ObjectMeta:{node2    3b49ad19-ba56-4f4a-b1fa-eef102063de9 146581 0 2021-10-29 21:07:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-10-30 01:59:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:27 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:27 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:54:27 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:54:27 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:54:30.099: INFO: 
Logging kubelet events for node node2
Oct 30 03:54:30.101: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 30 03:54:30.113: INFO: up-down-2-tgwxx started at 2021-10-30 03:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:54:30.114: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:54:30.114: INFO: verify-service-up-host-exec-pod started at 2021-10-30 03:54:29 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container agnhost-container ready: false, restart count 0
Oct 30 03:54:30.114: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:54:30.114: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:54:30.114: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 03:54:30.114: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:54:30.114: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 30 03:54:30.114: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:54:30.114: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 03:54:30.114: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:54:30.114: INFO: 	Container init ready: false, restart count 0
Oct 30 03:54:30.114: INFO: 	Container install ready: false, restart count 0
Oct 30 03:54:30.114: INFO: pod-server-1 started at 2021-10-30 03:53:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:54:30.114: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:54:30.114: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:54:30.114: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:54:30.114: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:54:30.114: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:54:30.114: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 03:54:30.114: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:54:30.114: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:54:30.114: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:54:30.114: INFO: 	Container rbac-proxy ready: true, restart count 0
W1030 03:54:30.134067      31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:54:30.463: INFO: 
Latency metrics for node node2
Oct 30 03:54:30.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-1111" for this suite.


• Failure [83.215 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Oct 30 03:54:29.474: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":2,"skipped":305,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Oct 30 03:54:30.474: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:53:22.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-3021
STEP: creating service up-down-1 in namespace services-3021
STEP: creating replication controller up-down-1 in namespace services-3021
I1030 03:53:22.614156      22 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3021, replica count: 3
I1030 03:53:25.666441      22 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:53:28.667497      22 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:53:31.668180      22 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:53:34.669512      22 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-3021
STEP: creating service up-down-2 in namespace services-3021
STEP: creating replication controller up-down-2 in namespace services-3021
I1030 03:53:34.682262      22 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3021, replica count: 3
I1030 03:53:37.733716      22 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:53:40.734936      22 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Oct 30 03:53:40.738: INFO: Creating new host exec pod
Oct 30 03:53:40.751: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:42.755: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:44.754: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:46.758: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:53:46.758: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
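The three reachable backends can also be confirmed from the service's Endpoints object before the wget loop runs. A minimal sketch using the namespace and service name from this run; expect one ready address per up-down-1 replica:

  kubectl --kubeconfig=/root/.kube/config -n services-3021 get endpoints up-down-1 -o wide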
Oct 30 03:53:50.772: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.56.50:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-host-exec-pod
Oct 30 03:53:50.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.56.50:80 2>&1 || true; echo; done'
Oct 30 03:53:51.138: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.56.50:80\n+ echo\n"
Oct 30 03:53:51.139: INFO: stdout: "up-down-1-6425n\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-6425n\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-vrthx\nup-down-1-zk986\nup-down-1-zk986\nup-down-1-zk986\n"
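The check above fans 150 wget requests at the service ClusterIP from the host-network exec pod and requires every replica's hostname to appear in the output. The same probe can be rerun by hand; a minimal sketch reusing the exact command from this log, with a sort/uniq added here only to count hits per backend:

  kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-host-exec-pod -- \
    /bin/sh -c 'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.56.50:80 2>&1 || true; echo; done' \
    | sort | uniq -c

Seeing all three names (up-down-1-6425n, up-down-1-vrthx, up-down-1-zk986) is what the framework treats as the service being up.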
Oct 30 03:53:51.139: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.56.50:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-exec-pod-b6f98
Oct 30 03:53:51.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-exec-pod-b6f98 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.56.50:80 2>&1 || true; echo; done'
Oct 30 03:53:51.518: INFO: stderr: "+ seq 1 150", then the shell -x trace of each of the 150 loop iterations: "+ wget -q -T 1 -O - http://10.233.56.50:80" followed by "+ echo"
Oct 30 03:53:51.519: INFO: stdout: one line per request; every response came from one of the three up-down-1 backends (up-down-1-vrthx, up-down-1-zk986, up-down-1-6425n)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3021
STEP: Deleting pod verify-service-up-exec-pod-b6f98 in namespace services-3021
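The "verifying service ... is up" blocks in this run all use the same probe: exec a bounded wget loop inside a helper pod against the service ClusterIP and confirm that every backend pod name shows up in stdout. Below is a minimal stand-alone sketch of that check, not the e2e framework code itself; it shells out to the same kubectl exec invocation logged above (namespace, pod name and ClusterIP copied from this log), and the distinct-backend count is the only thing it inspects.

```go
// probe.go: hedged sketch of the reachability probe used in this log.
// It runs the same bounded wget loop via `kubectl exec` and reports which
// backend pods answered. Names and addresses are copied from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		ns       = "services-3021"
		execPod  = "verify-service-up-host-exec-pod"
		shellCmd = "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.56.50:80 2>&1 || true; echo; done"
	)

	// Equivalent of the `kubectl ... exec ... -- /bin/sh -x -c '...'` call in the log.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace="+ns, "exec", execPod, "--", "/bin/sh", "-x", "-c", shellCmd).Output()
	if err != nil {
		panic(err)
	}

	// Count how many distinct backend pod names answered; the test expects
	// every endpoint of the service to show up at least once.
	backends := map[string]int{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			backends[line]++
		}
	}
	fmt.Printf("%d distinct backends answered: %v\n", len(backends), backends)
}
```

Run against this cluster it would report 3 distinct backends (up-down-1-vrthx, up-down-1-zk986, up-down-1-6425n), matching the stdout summarized above.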
STEP: verifying service up-down-2 is up
Oct 30 03:53:51.531: INFO: Creating new host exec pod
Oct 30 03:53:51.542: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:53.547: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:55.547: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:57.546: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:53:59.546: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:01.547: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:03.546: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:05.547: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
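The repeated status lines above are the framework's roughly 2-second poll until the helper pod reports Running with Ready = true. Outside the framework, roughly the same wait can be delegated to kubectl wait; a hedged sketch follows (pod and namespace names from this log; the 5m timeout is an assumed value, not what the framework uses).

```go
// wait.go: hedged equivalent of the readiness polling above, delegating the
// poll loop to `kubectl wait` instead of re-implementing it.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace=services-3021", "wait", "pod/verify-service-up-host-exec-pod",
		"--for=condition=Ready", "--timeout=5m") // 5m timeout is an assumed value
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("pod never became Ready: %v", err)
	}
	log.Println("pod is Running and Ready")
}
```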
Oct 30 03:54:05.547: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:54:09.569: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-host-exec-pod
Oct 30 03:54:09.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done'
Oct 30 03:54:09.950: INFO: stderr: "+ seq 1 150", then the shell -x trace of each of the 150 loop iterations: "+ wget -q -T 1 -O - http://10.233.10.254:80" followed by "+ echo"
Oct 30 03:54:09.951: INFO: stdout: one line per request; every response came from one of the three up-down-2 backends (up-down-2-gd9xv, up-down-2-tgwxx, up-down-2-5cz4q)
Oct 30 03:54:09.951: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-exec-pod-l9xhx
Oct 30 03:54:09.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-exec-pod-l9xhx -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done'
Oct 30 03:54:10.346: INFO: stderr: "+ seq 1 150", then the shell -x trace of each of the 150 loop iterations: "+ wget -q -T 1 -O - http://10.233.10.254:80" followed by "+ echo"
Oct 30 03:54:10.346: INFO: stdout: one line per request; every response came from one of the three up-down-2 backends (up-down-2-gd9xv, up-down-2-tgwxx, up-down-2-5cz4q)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3021
STEP: Deleting pod verify-service-up-exec-pod-l9xhx in namespace services-3021
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3021, will wait for the garbage collector to delete the pods
Oct 30 03:54:10.420: INFO: Deleting ReplicationController up-down-1 took: 5.597586ms
Oct 30 03:54:10.521: INFO: Terminating ReplicationController up-down-1 pods took: 100.651621ms
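The "stopping service up-down-1" step is a cascading delete: remove the ReplicationController and let the garbage collector terminate its pods, which is why the log reports two separate timings. A hedged command-line equivalent using the names from this log is sketched below; kubectl's background cascade is assumed to match the framework's wait-for-GC behaviour.

```go
// teardown.go: hedged sketch of the "stopping service up-down-1" step above:
// delete the ReplicationController and let the garbage collector remove its pods.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace=services-3021", "delete", "rc", "up-down-1",
		"--cascade=background").CombinedOutput() // assumed to mirror the framework's GC-based cleanup
	if err != nil {
		log.Fatalf("delete failed: %v\n%s", err, out)
	}
	log.Printf("deleted: %s", out)
}
```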
STEP: verifying service up-down-1 is not up
Oct 30 03:54:22.931: INFO: Creating new host exec pod
Oct 30 03:54:22.943: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:24.948: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:26.950: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:54:26.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.56.50:80 && echo service-down-failed'
Oct 30 03:54:29.221: INFO: rc: 28
Oct 30 03:54:29.222: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.56.50:80 && echo service-down-failed" in pod services-3021/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.56.50:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.56.50:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3021
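The "not up" check inverts the probe: curl the old ClusterIP with a 2-second connect timeout and treat exit code 28 (curl's timeout code) as success, while the "&& echo service-down-failed" tail only fires if something still answers. A hedged sketch of that interpretation, reusing the kubectl exec pattern and the names and address from the log:

```go
// downcheck.go: hedged sketch of the "service is not up" check above. curl
// exit code 28 means the connect timed out, i.e. nothing answers on the old
// ClusterIP anymore, which is what the test wants to see.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace=services-3021", "exec", "verify-service-down-host-exec-pod", "--",
		"/bin/sh", "-c", "curl -g -s --connect-timeout 2 http://10.233.56.50:80 && echo service-down-failed")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("unexpected answer, service still up:\n%s\n", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 28:
		fmt.Println("curl timed out (rc 28): service is down, as expected")
	default:
		fmt.Printf("unexpected error: %v\n%s\n", err, out)
	}
}
```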
STEP: verifying service up-down-2 is still up
Oct 30 03:54:29.231: INFO: Creating new host exec pod
Oct 30 03:54:29.243: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:31.248: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:54:31.248: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:54:35.266: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-host-exec-pod
Oct 30 03:54:35.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done'
Oct 30 03:54:35.598: INFO: stderr: "+ seq 1 150", then the shell -x trace of each of the 150 loop iterations: "+ wget -q -T 1 -O - http://10.233.10.254:80" followed by "+ echo"
Oct 30 03:54:35.598: INFO: stdout: one line per request; every response came from one of the three up-down-2 backends (up-down-2-5cz4q, up-down-2-gd9xv, up-down-2-tgwxx)
Oct 30 03:54:35.598: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-exec-pod-9qg24
Oct 30 03:54:35.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-exec-pod-9qg24 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done'
Oct 30 03:54:36.140: INFO: stderr: "+ seq 1 150", then the shell -x trace of each of the 150 loop iterations: "+ wget -q -T 1 -O - http://10.233.10.254:80" followed by "+ echo"
Oct 30 03:54:36.140: INFO: stdout: one line per request; every response came from one of the three up-down-2 backends (up-down-2-gd9xv, up-down-2-tgwxx, up-down-2-5cz4q)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3021
STEP: Deleting pod verify-service-up-exec-pod-9qg24 in namespace services-3021
STEP: creating service up-down-3 in namespace services-3021
STEP: creating replication controller up-down-3 in namespace services-3021
I1030 03:54:36.165942      22 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3021, replica count: 3
I1030 03:54:39.217262      22 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:54:42.221365      22 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
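The runner lines above show the remaining setup step: create a ReplicationController up-down-3 with replica count 3 and poll until all 3 pods are running. One way to watch for the equivalent condition from outside the framework is to poll the RC's status.readyReplicas; a hedged sketch follows (object names from the log; the poll interval and iteration limit are assumptions).

```go
// rcwait.go: hedged sketch of waiting for the up-down-3 ReplicationController
// to reach 3 ready replicas, by polling `kubectl get rc -o jsonpath`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 60; i++ { // assumed limit: ~2 minutes at 2s per poll
		out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
			"--namespace=services-3021", "get", "rc", "up-down-3",
			"-o", "jsonpath={.status.readyReplicas}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "3" {
			fmt.Println("up-down-3: 3 out of 3 replicas ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for up-down-3 replicas")
}
```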
STEP: verifying service up-down-2 is still up
Oct 30 03:54:42.224: INFO: Creating new host exec pod
Oct 30 03:54:42.238: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:44.243: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:46.243: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:54:46.243: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:54:50.259: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-host-exec-pod
Oct 30 03:54:50.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done'
Oct 30 03:54:50.590: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n"
Oct 30 03:54:50.591: INFO: stdout: "up-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\n"
Oct 30 03:54:50.591: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-exec-pod-d95bb
Oct 30 03:54:50.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-exec-pod-d95bb -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.10.254:80 2>&1 || true; echo; done'
Oct 30 03:54:50.973: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.10.254:80\n+ echo\n"
Oct 30 03:54:50.974: INFO: stdout: "up-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-5cz4q\nup-down-2-gd9xv\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-gd9xv\nup-down-2-tgwxx\nup-down-2-tgwxx\nup-down-2-5cz4q\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3021
STEP: Deleting pod verify-service-up-exec-pod-d95bb in namespace services-3021
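(Editor's note) The two exec invocations above are the service-reachability probe this test repeats for each service it brings up: from a host-network pod and from an ordinary in-cluster exec pod it fires 150 short-timeout wget requests at the service ClusterIP (10.233.10.254:80 here) and records which backend pod answered each request. Below is a minimal Go sketch of that probe; it shells out to kubectl the same way the log shows, but the probeService helper, its signature and its error handling are illustrative assumptions, not the e2e framework's actual code.

```go
// verifyservice_sketch.go — a minimal sketch of the "verify service up" probe
// seen in the log: run 150 short-timeout wget requests against the service
// ClusterIP from inside a pod via `kubectl exec`, and tally the backend pod
// names that answered. Assumes kubectl is on PATH; illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func probeService(kubeconfig, namespace, execPod, clusterIP string, attempts int) (map[string]int, error) {
	// Same shell loop the e2e log records: one request, then an echo, per iteration.
	script := fmt.Sprintf(
		"for i in $(seq 1 %d); do wget -q -T 1 -O - http://%s 2>&1 || true; echo; done",
		attempts, clusterIP)
	out, err := exec.Command("kubectl",
		"--kubeconfig="+kubeconfig,
		"--namespace="+namespace,
		"exec", execPod, "--", "/bin/sh", "-c", script,
	).Output()
	if err != nil {
		return nil, err
	}
	// Each non-empty stdout line is the hostname of the backend pod that served that request.
	hits := map[string]int{}
	for _, line := range strings.Split(string(out), "\n") {
		name := strings.TrimSpace(line)
		if name != "" {
			hits[name]++
		}
	}
	return hits, nil
}

func main() {
	hits, err := probeService("/root/.kube/config", "services-3021",
		"verify-service-up-exec-pod-d95bb", "10.233.10.254:80", 150)
	if err != nil {
		panic(err)
	}
	for pod, n := range hits {
		fmt.Printf("%s answered %d requests\n", pod, n)
	}
}
```

The stdout blocks above are exactly the per-request backend names such a loop collects; the test then compares the observed names against the service's expected endpoint pods.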
STEP: verifying service up-down-3 is up
Oct 30 03:54:50.986: INFO: Creating new host exec pod
Oct 30 03:54:51.000: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:53.003: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:55.003: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:57.004: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:54:59.008: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:55:01.009: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:55:03.004: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:55:05.003: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:55:07.005: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:55:09.004: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:55:09.004: INFO: Creating new exec pod
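(Editor's note) The repeated "Pending, waiting for it to be Running (with Ready = true)" lines are a readiness poll on the freshly created host exec pod, re-checked roughly every two seconds until the pod is Running and Ready. Below is a minimal client-go sketch of that kind of poll; the helper name waitForPodRunningReady, the 2-second interval and the 5-minute timeout are assumptions for illustration, not the framework's actual helper or values.

```go
// podready_sketch.go — sketch of the readiness poll the log shows:
// re-fetch the pod until Phase==Running and the PodReady condition is True.
// Helper name, interval and timeout are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodRunningReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Status.Phase == v1.PodRunning {
			for _, c := range pod.Status.Conditions {
				if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
					return nil // matches the log's final "Running (Ready = true)" line
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Running and Ready after %v", ns, name, timeout)
		}
		fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", name, pod.Status.Phase)
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodRunningReady(cs, "services-3021", "verify-service-up-host-exec-pod", 5*time.Minute); err != nil {
		panic(err)
	}
}
```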
STEP: verifying service has 3 reachable backends
Oct 30 03:55:13.021: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.171:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-host-exec-pod
Oct 30 03:55:13.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.171:80 2>&1 || true; echo; done'
Oct 30 03:55:13.411: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n"
Oct 30 03:55:13.412: INFO: stdout: "up-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\n"
Oct 30 03:55:13.412: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.171:80 2>&1 || true; echo; done" in pod services-3021/verify-service-up-exec-pod-xx7p8
Oct 30 03:55:13.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3021 exec verify-service-up-exec-pod-xx7p8 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.171:80 2>&1 || true; echo; done'
Oct 30 03:55:13.788: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.171:80\n+ echo\n"
Oct 30 03:55:13.789: INFO: stdout: "up-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-cqjmv\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-4w5jz\nup-down-3-xvft4\nup-down-3-cqjmv\nup-down-3-xvft4\nup-down-3-4w5jz\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3021
STEP: Deleting pod verify-service-up-exec-pod-xx7p8 in namespace services-3021
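(Editor's note) "Verifying service has 3 reachable backends" means the responses gathered by the two probes above must, between them, name every endpoint pod of up-down-3 (cqjmv, 4w5jz and xvft4 in this run). A short sketch of that check follows, reusing the kind of hit map built by the probe sketch earlier; expectedPods and the sample counts in main are illustrative stand-ins for what the framework reads from the Endpoints object and from the probe output.

```go
// backendcheck_sketch.go — sketch of the "has N reachable backends" check:
// every expected endpoint pod must appear among the probe responses, and no
// unexpected name may appear. Inputs are illustrative assumptions.
package main

import "fmt"

func allBackendsReachable(expectedPods []string, hits map[string]int) error {
	seen := map[string]bool{}
	for name := range hits {
		seen[name] = true
	}
	for _, p := range expectedPods {
		if !seen[p] {
			return fmt.Errorf("expected backend %q never answered", p)
		}
		delete(seen, p)
	}
	for name := range seen {
		return fmt.Errorf("unexpected backend %q answered", name)
	}
	return nil
}

func main() {
	expected := []string{"up-down-3-cqjmv", "up-down-3-4w5jz", "up-down-3-xvft4"}
	// Sample tallies (150 requests split across the three backends) for illustration.
	hits := map[string]int{"up-down-3-cqjmv": 48, "up-down-3-4w5jz": 55, "up-down-3-xvft4": 47}
	fmt.Println(allBackendsReachable(expected, hits)) // <nil> when all three backends were hit
}
```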
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:55:13.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3021" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:111.229 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":3,"skipped":1007,"failed":0}
Oct 30 03:55:13.817: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":2,"skipped":455,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for pod-Service: udp"]}
Oct 30 03:53:34.739: INFO: Running AfterSuite actions on all nodes
Oct 30 03:55:13.886: INFO: Running AfterSuite actions on node 1
Oct 30 03:55:13.886: INFO: Skipping dumping logs from cluster
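(Editor's note) The {"msg":...} lines interleaved with the AfterSuite output appear to be per-spec progress records that each parallel node emits as JSON: completed and skipped are that node's running counters, and failures lists the spec names that have failed on it so far. A minimal struct that would unmarshal such records, with field names mirroring the keys visible above (illustrative only):

```go
// specrecord_sketch.go — sketch of a struct matching the per-spec JSON lines
// in the log (e.g. {"msg":"PASSED ...","total":-1,"completed":3,...}).
// Field names mirror the keys shown; this is for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

type specRecord struct {
	Msg       string   `json:"msg"`
	Total     int      `json:"total"`
	Completed int      `json:"completed"`
	Skipped   int      `json:"skipped"`
	Failed    int      `json:"failed"`
	Failures  []string `json:"failures,omitempty"`
}

func main() {
	line := `{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":3,"skipped":1007,"failed":0}`
	var rec specRecord
	if err := json.Unmarshal([]byte(line), &rec); err != nil {
		panic(err)
	}
	fmt.Printf("%s (completed=%d, skipped=%d, failed=%d)\n", rec.Msg, rec.Completed, rec.Skipped, rec.Failed)
}
```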



Summarizing 3 Failures:

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for pod-Service: udp 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 27 of 5770 Specs in 229.694 seconds
FAIL! -- 24 Passed | 3 Failed | 0 Pending | 5743 Skipped


Ginkgo ran 1 suite in 3m51.363855397s
Test Suite Failed