Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651274350 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Apr 29 23:19:11.784: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.789: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:11.847: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 23:19:11.923: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting
Apr 29 23:19:11.923: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting
Apr 29 23:19:11.923: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 23:19:11.923: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 29 23:19:11.923: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 29 23:19:11.937: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 29 23:19:11.937: INFO: e2e test version: v1.21.9
Apr 29 23:19:11.938: INFO: kube-apiserver version: v1.21.1
Apr 29 23:19:11.939: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.945: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Apr 29 23:19:11.955: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.976: INFO: Cluster IP family: ipv4
SS
------------------------------
Apr 29 23:19:11.964: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.985: INFO: Cluster IP family: ipv4
Apr 29 23:19:11.964: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.985: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Apr 29 23:19:11.972: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.994: INFO: Cluster IP family: ipv4
Apr 29 23:19:11.972: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.994: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Apr 29 23:19:11.977: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:11.998: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Apr 29 23:19:11.980: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:12.002: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
Apr 29 23:19:11.985: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:12.005: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Apr 29 23:19:11.988: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:12.009: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0429 23:19:12.352169      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.352: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.354: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Apr 29 23:19:12.356: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:12.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2085" for this suite.

S [SKIPPING] [0.034 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W0429 23:19:12.455561      38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.455: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.457: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Apr 29 23:19:12.459: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:12.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-4431" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Apr 29 23:19:14.032: INFO: (0) /api/v1/nodes/node2/proxy/logs/:
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Apr 29 23:19:14.639: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-844" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
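
For reference, the /api/v1/nodes/<node>/proxy/logs/ listing printed by the proxy spec above can be fetched directly with kubectl's raw API access; node2 is the node name taken from the log:

# Read a node's log directory listing through the API server's node proxy subresource.
kubectl get --raw /api/v1/nodes/node2/proxy/logs/
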
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0429 23:19:12.229000      26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.229: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.230: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Apr 29 23:19:12.246: INFO: Creating new exec pod
Apr 29 23:19:20.265: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Apr 29 23:19:20.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2035 exec execpod-noendpointsdrggs -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Apr 29 23:19:22.189: INFO: rc: 1
Apr 29 23:19:22.189: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2035 exec execpod-noendpointsdrggs -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:22.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2035" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:9.987 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":54,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:11.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0429 23:19:11.991403      25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:11.991: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:11.995: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Apr 29 23:19:12.012: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-8f774306-cf3f-47fc-b5c5-81bf8c8b6ecf  dns-7248  6e1c7dd9-37ce-4ba0-a08b-4799179ced59 72520 0 2022-04-29 23:19:12 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-04-29 23:19:12 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-tp2qb,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-ftbm2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ftbm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 23:19:20.027: INFO: testServerIP is 10.244.4.101
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 29 23:19:20.036: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-7248  997c9cd5-7c3c-41de-a8b7-940c8f5b60e0 72804 0 2022-04-29 23:19:20 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-04-29 23:19:20 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-22rbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22rbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.4.101],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Apr 29 23:19:24.046: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-7248 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:19:24.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Apr 29 23:19:24.132: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-7248 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:19:24.132: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:19:24.219: INFO: Deleting pod e2e-dns-utils...
Apr 29 23:19:24.228: INFO: Deleting pod e2e-configmap-dns-server-8f774306-cf3f-47fc-b5c5-81bf8c8b6ecf...
Apr 29 23:19:24.235: INFO: Deleting configmap e2e-coredns-configmap-tp2qb...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:24.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7248" for this suite.


• [SLOW TEST:12.279 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-4995
STEP: creating replication controller externalip-test in namespace services-4995
I0429 23:19:12.536006      38 runners.go:190] Created replication controller with name: externalip-test, namespace: services-4995, replica count: 2
I0429 23:19:15.588313      38 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:18.590786      38 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:21.591953      38 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:24.595246      38 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 23:19:24.595: INFO: Creating new exec pod
Apr 29 23:19:31.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4995 exec execpod7jgpp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Apr 29 23:19:32.010: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Apr 29 23:19:32.010: INFO: stdout: "externalip-test-zmfvz"
Apr 29 23:19:32.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4995 exec execpod7jgpp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.18.201 80'
Apr 29 23:19:32.531: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.18.201 80\nConnection to 10.233.18.201 80 port [tcp/http] succeeded!\n"
Apr 29 23:19:32.531: INFO: stdout: "externalip-test-zmfvz"
Apr 29 23:19:32.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4995 exec execpod7jgpp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Apr 29 23:19:32.775: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Apr 29 23:19:32.775: INFO: stdout: "externalip-test-7fpvs"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:32.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4995" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:20.277 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":1,"skipped":180,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
W0429 23:19:12.076603      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.076: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.078: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Apr 29 23:19:12.098: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:14.102: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:16.101: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:18.103: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:20.101: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Apr 29 23:19:20.114: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:22.118: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:24.124: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:26.118: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:28.120: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Apr 29 23:19:30.140: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:32.144: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:34.147: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Apr 29 23:19:34.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-5939 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Apr 29 23:19:34.447: INFO: stderr: "+ grep -m 1 CLOSE_WAIT.*dport=11302\n+ conntrack -L -f ipv4 -d 10.10.190.208\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Apr 29 23:19:34.447: INFO: stdout: "tcp      6 3597 CLOSE_WAIT src=10.244.3.85 dst=10.10.190.208 sport=46838 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=11114 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Apr 29 23:19:34.448: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3597 CLOSE_WAIT src=10.244.3.85 dst=10.10.190.208 sport=46838 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=11114 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:34.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-5939" for this suite.


• [SLOW TEST:22.400 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0429 23:19:12.865168      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.865: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.867: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-5493
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:13.025: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:13.062: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:15.066: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:17.066: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:19.071: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:21.066: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:23.066: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:25.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:27.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:29.067: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:31.067: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:33.069: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:19:33.073: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 23:19:35.077: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:19:41.095: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:19:41.095: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:41.102: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:41.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5493" for this suite.


S [SKIPPING] [28.272 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0429 23:19:12.181180      37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.181: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.183: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-5335
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:12.335: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:12.368: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:14.372: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:16.372: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:18.372: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:20.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:22.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:24.373: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:26.373: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:28.373: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:30.372: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:32.372: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:34.373: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:19:34.378: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:19:42.398: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:19:42.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:42.405: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:42.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5335" for this suite.


S [SKIPPING] [30.256 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:24.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Apr 29 23:19:24.291: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:26.294: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:28.296: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:30.294: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Apr 29 23:19:30.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6452 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 29 23:19:30.550: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Apr 29 23:19:30.550: INFO: stdout: "iptables"
Apr 29 23:19:30.550: INFO: proxyMode: iptables
Apr 29 23:19:30.557: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 23:19:30.559: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-6452
Apr 29 23:19:30.565: INFO: sourceip-test cluster ip: 10.233.49.216
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Apr 29 23:19:30.583: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:32.586: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:34.588: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-6452 to expose endpoints map[echo-sourceip:[8080]]
Apr 29 23:19:34.596: INFO: successfully validated that service sourceip-test in namespace services-6452 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Apr 29 23:19:34.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Apr 29 23:19:36.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7c88c699dd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:19:38.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871178, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7c88c699dd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:19:40.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871178, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871174, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7c88c699dd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:19:42.612: INFO: Waiting up to 2m0s to get response from 10.233.49.216:8080
Apr 29 23:19:42.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6452 exec pause-pod-7c88c699dd-5hrn8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.49.216:8080/clientip'
Apr 29 23:19:42.885: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.49.216:8080/clientip\n"
Apr 29 23:19:42.885: INFO: stdout: "10.244.3.87:41056"
STEP: Verifying the preserved source ip
Apr 29 23:19:42.885: INFO: Waiting up to 2m0s to get response from 10.233.49.216:8080
Apr 29 23:19:42.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6452 exec pause-pod-7c88c699dd-6dnck -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.49.216:8080/clientip'
Apr 29 23:19:43.167: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.49.216:8080/clientip\n"
Apr 29 23:19:43.167: INFO: stdout: "10.244.4.119:58768"
STEP: Verifying the preserved source ip
Apr 29 23:19:43.167: INFO: Deleting deployment
Apr 29 23:19:43.171: INFO: Cleaning up the echo server pod
Apr 29 23:19:43.177: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:43.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6452" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:18.935 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0429 23:19:12.201819      23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.202: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.203: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-5762
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:12.386: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:12.417: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:14.419: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:16.420: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:18.424: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:20.419: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:22.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:24.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:26.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:28.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:30.420: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:32.421: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:19:32.426: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 23:19:34.432: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:19:44.469: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:19:44.469: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:44.480: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:44.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5762" for this suite.


S [SKIPPING] [32.306 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:41.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-102
STEP: deleting original service nodeport-reuse
Apr 29 23:19:41.399: INFO: Creating new host exec pod
Apr 29 23:19:41.418: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:43.422: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:45.422: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:47.423: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:49.422: INFO: The status of Pod hostexec is Running (Ready = true)
Apr 29 23:19:49.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-102 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30392' | tail -n +2 | grep LISTEN'
Apr 29 23:19:49.747: INFO: stderr: "+ ss -ant46 'sport = :30392'\n+ tail -n +2\n+ grep LISTEN\n"
Apr 29 23:19:49.747: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 30392
STEP: deleting service nodeport-reuse in namespace services-102
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:49.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-102" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:8.424 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":1,"skipped":501,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:49.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Apr 29 23:19:49.856: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Apr 29 23:19:49.961: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:49.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-368" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.140 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:42.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8051.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8051.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8051.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8051.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8051.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8051.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 23:19:50.594: INFO: DNS probes using dns-8051/dns-test-6248379e-2d69-4d49-b76b-7abfab367213 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:50.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8051" for this suite.


• [SLOW TEST:8.102 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":82,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:50.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 29 23:19:50.777: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:50.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-3219" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:14.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-9605
STEP: creating a client pod for probing the service svc-udp
Apr 29 23:19:15.140: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:17.142: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:19.144: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:21.143: INFO: The status of Pod pod-client is Running (Ready = true)
Apr 29 23:19:21.600: INFO: Pod client logs: Fri Apr 29 23:19:19 UTC 2022
Fri Apr 29 23:19:19 UTC 2022 Try: 1

Fri Apr 29 23:19:19 UTC 2022 Try: 2

Fri Apr 29 23:19:19 UTC 2022 Try: 3

Fri Apr 29 23:19:19 UTC 2022 Try: 4

Fri Apr 29 23:19:19 UTC 2022 Try: 5

Fri Apr 29 23:19:19 UTC 2022 Try: 6

Fri Apr 29 23:19:19 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Apr 29 23:19:21.613: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:23.617: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:25.616: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:27.617: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:29.618: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-9605 to expose endpoints map[pod-server-1:[80]]
Apr 29 23:19:29.629: INFO: successfully validated that service svc-udp in namespace conntrack-9605 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Apr 29 23:19:39.656: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:41.662: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:43.660: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:45.660: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Apr 29 23:19:45.662: INFO: Cleaning up pod-server-1 pod
Apr 29 23:19:45.669: INFO: Waiting for pod pod-server-1 to disappear
Apr 29 23:19:45.671: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-9605 to expose endpoints map[pod-server-2:[80]]
Apr 29 23:19:45.678: INFO: successfully validated that service svc-udp in namespace conntrack-9605 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:55.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-9605" for this suite.


• [SLOW TEST:40.765 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":2,"skipped":705,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:55.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-6633
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-6633
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:19:55.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6633" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":802,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:34.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-2172
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:34.603: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:34.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:36.642: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:38.646: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:40.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:42.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:44.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:46.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:48.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:50.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:52.645: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:54.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:56.642: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:19:56.648: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:02.671: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:02.671: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:02.677: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:02.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2172" for this suite.


S [SKIPPING] [28.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:33.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-4586
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:33.192: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:33.226: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:35.229: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:37.229: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:39.231: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:41.229: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:43.230: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:45.230: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:47.229: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:49.230: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:51.231: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:53.231: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:19:53.236: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 23:19:55.239: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 23:19:57.241: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:03.262: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:03.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:03.271: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:03.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4586" for this suite.


S [SKIPPING] [30.227 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:44.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-8746de24-15cd-4bd3-bd1b-ba13d2248c3a]
STEP: Verifying pods for RC slow-terminating-unready-pod
Apr 29 23:19:44.634: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
Apr 29 23:19:49.639: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Apr 29 23:19:49.646: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-6bgz8]: "NOW: 2022-04-29 23:19:49.643522193 +0000 UTC m=+2.223313282", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-4087.svc.cluster.local
Apr 29 23:19:49.646: INFO: Creating new exec pod
Apr 29 23:19:55.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4087 exec execpod-rgffn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/'
Apr 29 23:19:56.736: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/\n"
Apr 29 23:19:56.736: INFO: stdout: "NOW: 2022-04-29 23:19:56.708957103 +0000 UTC m=+9.288748234"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-4087 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Apr 29 23:20:01.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4087 exec execpod-rgffn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/; test "$?" -ne "0"'
Apr 29 23:20:01.996: INFO: rc: 1
Apr 29 23:20:01.996: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-04-29 23:20:01.986653391 +0000 UTC m=+14.566444504, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4087 exec execpod-rgffn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2022-04-29 23:20:01.986653391 +0000 UTC m=+14.566444504
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Apr 29 23:20:03.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4087 exec execpod-rgffn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/; test "$?" -ne "0"'
Apr 29 23:20:05.290: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Apr 29 23:20:05.290: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Apr 29 23:20:05.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4087 exec execpod-rgffn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/'
Apr 29 23:20:05.790: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4087.svc.cluster.local:80/\n"
Apr 29 23:20:05.790: INFO: stdout: "NOW: 2022-04-29 23:20:05.759522323 +0000 UTC m=+18.339313424"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-4087
STEP: deleting service tolerate-unready in namespace services-4087
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:05.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4087" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:21.230 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":110,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:51.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-1814
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:51.212: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:51.246: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:53.250: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:55.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:57.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:59.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:01.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:03.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:05.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:07.253: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:09.253: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:11.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:13.252: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:13.258: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:19.280: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:19.280: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:19.286: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:19.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1814" for this suite.


S [SKIPPING] [28.202 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:19.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 29 23:20:19.542: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:19.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-9752" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:19.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 29 23:20:19.940: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 29 23:20:19.950: INFO: starting watch
STEP: patching
STEP: updating
Apr 29 23:20:19.957: INFO: waiting for watch events with expected annotations
Apr 29 23:20:19.958: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Apr 29 23:20:19.958: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:19.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9576" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":655,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:20.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 29 23:20:20.064: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:20.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1199" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:56.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-3216
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:56.186: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:56.218: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:58.221: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:00.222: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:02.222: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:04.222: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:06.221: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:08.223: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:10.221: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:12.220: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:14.224: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:16.224: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:18.223: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:18.228: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:22.264: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:22.264: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:22.272: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:22.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3216" for this suite.


S [SKIPPING] [26.227 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:22.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 29 23:20:22.625: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 29 23:20:22.629: INFO: starting watch
STEP: patching
STEP: updating
Apr 29 23:20:22.637: INFO: waiting for watch events with expected annotations
Apr 29 23:20:22.637: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Apr 29 23:20:22.637: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:22.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-5918" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":1017,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W0429 23:19:12.023088      34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.023: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.025: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Apr 29 23:19:12.044: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:14.048: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:16.048: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:18.049: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:20.049: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Apr 29 23:19:20.068: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:22.072: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:24.073: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
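
The boom-server log below prints the intermediate values of an Internet-checksum computation over each crafted TCP segment; the repeated "ret" lines are a 32-bit running sum being folded into 16 bits (for the first segment, the running sum 446109 plus the trailing odd byte 33 gives 446142, which folds to 52926 + 6 = 52932, the value printed twice). A minimal sketch of that fold follows; the final ones'-complement step is the standard last stage of a TCP checksum and is an assumption here, since the log only shows the folded sum.

// checksum_fold_sketch.go - illustrative fold of a 32-bit running sum into a
// 16-bit Internet checksum, matching the "ret" values in the log below.
package main

import "fmt"

// fold adds the carries back into the low 16 bits until the sum fits.
func fold(sum uint32) uint16 {
	for sum > 0xffff {
		sum = (sum & 0xffff) + (sum >> 16)
	}
	return uint16(sum)
}

func main() {
	sum := uint32(446109 + 33) // running sum plus trailing odd byte, as logged
	folded := fold(sum)
	fmt.Println(folded)  // 52932, printed twice in the log below
	fmt.Println(^folded) // ones' complement; what a real TCP checksum field would carry
}
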
Apr 29 23:20:24.120: INFO: boom-server pod logs: 2022/04/29 23:19:18 external ip: 10.244.4.102
2022/04/29 23:19:18 listen on 0.0.0.0:9000
2022/04/29 23:19:18 probing 10.244.4.102
2022/04/29 23:19:24 tcp packet: &{SrcPort:38353 DestPort:9000 Seq:86961513 Ack:0 Flags:40962 WindowSize:29200 Checksum:45433 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:24 tcp packet: &{SrcPort:38353 DestPort:9000 Seq:86961514 Ack:2741139903 Flags:32784 WindowSize:229 Checksum:53761 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:24 connection established
2022/04/29 23:19:24 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 149 209 163 96 243 31 5 46 237 106 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:24 checksumer: &{sum:446109 oddByte:33 length:39}
2022/04/29 23:19:24 ret:  446142
2022/04/29 23:19:24 ret:  52932
2022/04/29 23:19:24 ret:  52932
2022/04/29 23:19:24 boom packet injected
2022/04/29 23:19:24 tcp packet: &{SrcPort:38353 DestPort:9000 Seq:86961514 Ack:2741139903 Flags:32785 WindowSize:229 Checksum:53760 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:26 tcp packet: &{SrcPort:34011 DestPort:9000 Seq:2143573477 Ack:0 Flags:40962 WindowSize:29200 Checksum:55181 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:26 tcp packet: &{SrcPort:34011 DestPort:9000 Seq:2143573478 Ack:693396840 Flags:32784 WindowSize:229 Checksum:32427 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:26 connection established
2022/04/29 23:19:26 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 132 219 41 82 222 200 127 196 85 230 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:26 checksumer: &{sum:558303 oddByte:33 length:39}
2022/04/29 23:19:26 ret:  558336
2022/04/29 23:19:26 ret:  34056
2022/04/29 23:19:26 ret:  34056
2022/04/29 23:19:26 boom packet injected
2022/04/29 23:19:26 tcp packet: &{SrcPort:34011 DestPort:9000 Seq:2143573478 Ack:693396840 Flags:32785 WindowSize:229 Checksum:32426 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:28 tcp packet: &{SrcPort:42140 DestPort:9000 Seq:1726506385 Ack:0 Flags:40962 WindowSize:29200 Checksum:47403 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:28 tcp packet: &{SrcPort:42140 DestPort:9000 Seq:1726506386 Ack:1950523254 Flags:32784 WindowSize:229 Checksum:54138 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:28 connection established
2022/04/29 23:19:28 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 164 156 116 65 24 214 102 232 101 146 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:28 checksumer: &{sum:529019 oddByte:33 length:39}
2022/04/29 23:19:28 ret:  529052
2022/04/29 23:19:28 ret:  4772
2022/04/29 23:19:28 ret:  4772
2022/04/29 23:19:28 boom packet injected
2022/04/29 23:19:28 tcp packet: &{SrcPort:42140 DestPort:9000 Seq:1726506386 Ack:1950523254 Flags:32785 WindowSize:229 Checksum:54137 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:30 tcp packet: &{SrcPort:35429 DestPort:9000 Seq:3015405735 Ack:0 Flags:40962 WindowSize:29200 Checksum:29608 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:30 tcp packet: &{SrcPort:35429 DestPort:9000 Seq:3015405736 Ack:3436026867 Flags:32784 WindowSize:229 Checksum:12575 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:30 connection established
2022/04/29 23:19:30 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 138 101 204 204 21 83 179 187 112 168 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:30 checksumer: &{sum:511246 oddByte:33 length:39}
2022/04/29 23:19:30 ret:  511279
2022/04/29 23:19:30 ret:  52534
2022/04/29 23:19:30 ret:  52534
2022/04/29 23:19:30 boom packet injected
2022/04/29 23:19:30 tcp packet: &{SrcPort:35429 DestPort:9000 Seq:3015405736 Ack:3436026867 Flags:32785 WindowSize:229 Checksum:12574 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:32 tcp packet: &{SrcPort:36334 DestPort:9000 Seq:4265300528 Ack:0 Flags:40962 WindowSize:29200 Checksum:15430 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:32 tcp packet: &{SrcPort:36334 DestPort:9000 Seq:4265300529 Ack:734732068 Flags:32784 WindowSize:229 Checksum:3770 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:32 connection established
2022/04/29 23:19:32 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 141 238 43 201 152 132 254 59 82 49 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:32 checksumer: &{sum:494880 oddByte:33 length:39}
2022/04/29 23:19:32 ret:  494913
2022/04/29 23:19:32 ret:  36168
2022/04/29 23:19:32 ret:  36168
2022/04/29 23:19:32 boom packet injected
2022/04/29 23:19:32 tcp packet: &{SrcPort:36334 DestPort:9000 Seq:4265300529 Ack:734732068 Flags:32785 WindowSize:229 Checksum:3769 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:34 tcp packet: &{SrcPort:38353 DestPort:9000 Seq:86961515 Ack:2741139904 Flags:32784 WindowSize:229 Checksum:33758 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:34 tcp packet: &{SrcPort:37011 DestPort:9000 Seq:2470654388 Ack:0 Flags:40962 WindowSize:29200 Checksum:48452 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:34 tcp packet: &{SrcPort:37011 DestPort:9000 Seq:2470654389 Ack:1713308855 Flags:32784 WindowSize:229 Checksum:26885 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:34 connection established
2022/04/29 23:19:34 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 144 147 102 29 126 23 147 67 49 181 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:34 checksumer: &{sum:435384 oddByte:33 length:39}
2022/04/29 23:19:34 ret:  435417
2022/04/29 23:19:34 ret:  42207
2022/04/29 23:19:34 ret:  42207
2022/04/29 23:19:34 boom packet injected
2022/04/29 23:19:34 tcp packet: &{SrcPort:37011 DestPort:9000 Seq:2470654389 Ack:1713308855 Flags:32785 WindowSize:229 Checksum:26884 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:36 tcp packet: &{SrcPort:34011 DestPort:9000 Seq:2143573479 Ack:693396841 Flags:32784 WindowSize:229 Checksum:12424 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:36 tcp packet: &{SrcPort:33024 DestPort:9000 Seq:4271655313 Ack:0 Flags:40962 WindowSize:29200 Checksum:16847 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:36 tcp packet: &{SrcPort:33024 DestPort:9000 Seq:4271655314 Ack:2452025310 Flags:32784 WindowSize:229 Checksum:52878 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:36 connection established
2022/04/29 23:19:36 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 129 0 146 37 105 62 254 156 73 146 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:36 checksumer: &{sum:423747 oddByte:33 length:39}
2022/04/29 23:19:36 ret:  423780
2022/04/29 23:19:36 ret:  30570
2022/04/29 23:19:36 ret:  30570
2022/04/29 23:19:36 boom packet injected
2022/04/29 23:19:36 tcp packet: &{SrcPort:33024 DestPort:9000 Seq:4271655314 Ack:2452025310 Flags:32785 WindowSize:229 Checksum:52877 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:38 tcp packet: &{SrcPort:42140 DestPort:9000 Seq:1726506387 Ack:1950523255 Flags:32784 WindowSize:229 Checksum:34135 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:38 tcp packet: &{SrcPort:36862 DestPort:9000 Seq:4122088525 Ack:0 Flags:40962 WindowSize:29200 Checksum:26927 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:38 tcp packet: &{SrcPort:36862 DestPort:9000 Seq:4122088526 Ack:893577118 Flags:32784 WindowSize:229 Checksum:21315 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:38 connection established
2022/04/29 23:19:38 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 143 254 53 65 96 254 245 178 20 78 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:38 checksumer: &{sum:533165 oddByte:33 length:39}
2022/04/29 23:19:38 ret:  533198
2022/04/29 23:19:38 ret:  8918
2022/04/29 23:19:38 ret:  8918
2022/04/29 23:19:38 boom packet injected
2022/04/29 23:19:38 tcp packet: &{SrcPort:36862 DestPort:9000 Seq:4122088526 Ack:893577118 Flags:32785 WindowSize:229 Checksum:21314 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:40 tcp packet: &{SrcPort:35429 DestPort:9000 Seq:3015405737 Ack:3436026868 Flags:32784 WindowSize:229 Checksum:58108 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:40 tcp packet: &{SrcPort:34974 DestPort:9000 Seq:3004146538 Ack:0 Flags:40962 WindowSize:29200 Checksum:7235 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:40 tcp packet: &{SrcPort:34974 DestPort:9000 Seq:3004146539 Ack:3090195465 Flags:32784 WindowSize:229 Checksum:48940 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:40 connection established
2022/04/29 23:19:40 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 136 158 184 47 29 105 179 15 163 107 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:40 checksumer: &{sum:431667 oddByte:33 length:39}
2022/04/29 23:19:40 ret:  431700
2022/04/29 23:19:40 ret:  38490
2022/04/29 23:19:40 ret:  38490
2022/04/29 23:19:40 boom packet injected
2022/04/29 23:19:40 tcp packet: &{SrcPort:34974 DestPort:9000 Seq:3004146539 Ack:3090195465 Flags:32785 WindowSize:229 Checksum:48939 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:42 tcp packet: &{SrcPort:41211 DestPort:9000 Seq:4126003676 Ack:0 Flags:40962 WindowSize:29200 Checksum:35525 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:42 tcp packet: &{SrcPort:41211 DestPort:9000 Seq:4126003677 Ack:3822518344 Flags:32784 WindowSize:229 Checksum:40440 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:42 connection established
2022/04/29 23:19:42 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 160 251 227 213 121 168 245 237 209 221 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:42 checksumer: &{sum:600386 oddByte:33 length:39}
2022/04/29 23:19:42 ret:  600419
2022/04/29 23:19:42 ret:  10604
2022/04/29 23:19:42 ret:  10604
2022/04/29 23:19:42 boom packet injected
2022/04/29 23:19:42 tcp packet: &{SrcPort:41211 DestPort:9000 Seq:4126003677 Ack:3822518344 Flags:32785 WindowSize:229 Checksum:40439 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:42 tcp packet: &{SrcPort:36334 DestPort:9000 Seq:4265300530 Ack:734732069 Flags:32784 WindowSize:229 Checksum:49302 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:44 tcp packet: &{SrcPort:37011 DestPort:9000 Seq:2470654390 Ack:1713308856 Flags:32784 WindowSize:229 Checksum:6883 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:44 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:958142708 Ack:0 Flags:40962 WindowSize:29200 Checksum:5270 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:44 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:958142709 Ack:2617790221 Flags:32784 WindowSize:229 Checksum:6402 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:44 connection established
2022/04/29 23:19:44 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 133 20 156 6 200 109 57 28 24 245 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:44 checksumer: &{sum:425402 oddByte:33 length:39}
2022/04/29 23:19:44 ret:  425435
2022/04/29 23:19:44 ret:  32225
2022/04/29 23:19:44 ret:  32225
2022/04/29 23:19:44 boom packet injected
2022/04/29 23:19:44 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:958142709 Ack:2617790221 Flags:32785 WindowSize:229 Checksum:6401 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:46 tcp packet: &{SrcPort:33024 DestPort:9000 Seq:4271655315 Ack:2452025311 Flags:32784 WindowSize:229 Checksum:32876 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:46 tcp packet: &{SrcPort:36924 DestPort:9000 Seq:1157970023 Ack:0 Flags:40962 WindowSize:29200 Checksum:54849 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:46 tcp packet: &{SrcPort:36924 DestPort:9000 Seq:1157970024 Ack:4102351458 Flags:32784 WindowSize:229 Checksum:57099 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:46 connection established
2022/04/29 23:19:46 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 144 60 244 131 99 194 69 5 56 104 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:46 checksumer: &{sum:447460 oddByte:33 length:39}
2022/04/29 23:19:46 ret:  447493
2022/04/29 23:19:46 ret:  54283
2022/04/29 23:19:46 ret:  54283
2022/04/29 23:19:46 boom packet injected
2022/04/29 23:19:46 tcp packet: &{SrcPort:36924 DestPort:9000 Seq:1157970024 Ack:4102351458 Flags:32785 WindowSize:229 Checksum:57098 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:48 tcp packet: &{SrcPort:36862 DestPort:9000 Seq:4122088527 Ack:893577119 Flags:32784 WindowSize:229 Checksum:1313 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:48 tcp packet: &{SrcPort:34880 DestPort:9000 Seq:3254568821 Ack:0 Flags:40962 WindowSize:29200 Checksum:51814 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:48 tcp packet: &{SrcPort:34880 DestPort:9000 Seq:3254568822 Ack:1196491993 Flags:32784 WindowSize:229 Checksum:24093 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:48 connection established
2022/04/29 23:19:48 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 136 64 71 79 126 57 193 252 199 118 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:48 checksumer: &{sum:467029 oddByte:33 length:39}
2022/04/29 23:19:48 ret:  467062
2022/04/29 23:19:48 ret:  8317
2022/04/29 23:19:48 ret:  8317
2022/04/29 23:19:48 boom packet injected
2022/04/29 23:19:48 tcp packet: &{SrcPort:34880 DestPort:9000 Seq:3254568822 Ack:1196491993 Flags:32785 WindowSize:229 Checksum:24092 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:50 tcp packet: &{SrcPort:34974 DestPort:9000 Seq:3004146540 Ack:3090195466 Flags:32784 WindowSize:229 Checksum:28937 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:50 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2780805014 Ack:0 Flags:40962 WindowSize:29200 Checksum:52719 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:50 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2780805015 Ack:357260277 Flags:32784 WindowSize:229 Checksum:13503 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:50 connection established
2022/04/29 23:19:50 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 169 2 21 73 213 85 165 191 183 151 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:50 checksumer: &{sum:449647 oddByte:33 length:39}
2022/04/29 23:19:50 ret:  449680
2022/04/29 23:19:50 ret:  56470
2022/04/29 23:19:50 ret:  56470
2022/04/29 23:19:50 boom packet injected
2022/04/29 23:19:50 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2780805015 Ack:357260277 Flags:32785 WindowSize:229 Checksum:13502 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:52 tcp packet: &{SrcPort:41211 DestPort:9000 Seq:4126003678 Ack:3822518345 Flags:32784 WindowSize:229 Checksum:20436 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:52 tcp packet: &{SrcPort:46048 DestPort:9000 Seq:3776567335 Ack:0 Flags:40962 WindowSize:29200 Checksum:24406 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:52 tcp packet: &{SrcPort:46048 DestPort:9000 Seq:3776567336 Ack:4245883341 Flags:32784 WindowSize:229 Checksum:10421 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:52 connection established
2022/04/29 23:19:52 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 179 224 253 17 131 45 225 25 216 40 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:52 checksumer: &{sum:411244 oddByte:33 length:39}
2022/04/29 23:19:52 ret:  411277
2022/04/29 23:19:52 ret:  18067
2022/04/29 23:19:52 ret:  18067
2022/04/29 23:19:52 boom packet injected
2022/04/29 23:19:52 tcp packet: &{SrcPort:46048 DestPort:9000 Seq:3776567336 Ack:4245883341 Flags:32785 WindowSize:229 Checksum:10420 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:54 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:958142710 Ack:2617790222 Flags:32784 WindowSize:229 Checksum:51934 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:54 tcp packet: &{SrcPort:45776 DestPort:9000 Seq:1559167048 Ack:0 Flags:40962 WindowSize:29200 Checksum:46239 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:54 tcp packet: &{SrcPort:45776 DestPort:9000 Seq:1559167049 Ack:3041510473 Flags:32784 WindowSize:229 Checksum:891 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:54 connection established
2022/04/29 23:19:54 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 178 208 181 72 61 169 92 239 0 73 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:54 checksumer: &{sum:515712 oddByte:33 length:39}
2022/04/29 23:19:54 ret:  515745
2022/04/29 23:19:54 ret:  57000
2022/04/29 23:19:54 ret:  57000
2022/04/29 23:19:54 boom packet injected
2022/04/29 23:19:54 tcp packet: &{SrcPort:45776 DestPort:9000 Seq:1559167049 Ack:3041510473 Flags:32785 WindowSize:229 Checksum:890 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:56 tcp packet: &{SrcPort:36924 DestPort:9000 Seq:1157970025 Ack:4102351459 Flags:32784 WindowSize:229 Checksum:37095 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:56 tcp packet: &{SrcPort:41936 DestPort:9000 Seq:3930993854 Ack:0 Flags:40962 WindowSize:29200 Checksum:63993 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:56 tcp packet: &{SrcPort:41936 DestPort:9000 Seq:3930993855 Ack:4057420449 Flags:32784 WindowSize:229 Checksum:30239 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:56 connection established
2022/04/29 23:19:56 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 163 208 241 213 204 1 234 78 52 191 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:56 checksumer: &{sum:498174 oddByte:33 length:39}
2022/04/29 23:19:56 ret:  498207
2022/04/29 23:19:56 ret:  39462
2022/04/29 23:19:56 ret:  39462
2022/04/29 23:19:56 boom packet injected
2022/04/29 23:19:56 tcp packet: &{SrcPort:41936 DestPort:9000 Seq:3930993855 Ack:4057420449 Flags:32785 WindowSize:229 Checksum:30238 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:58 tcp packet: &{SrcPort:34880 DestPort:9000 Seq:3254568823 Ack:1196491994 Flags:32784 WindowSize:229 Checksum:4091 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:58 tcp packet: &{SrcPort:43732 DestPort:9000 Seq:504820760 Ack:0 Flags:40962 WindowSize:29200 Checksum:63490 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:19:58 tcp packet: &{SrcPort:43732 DestPort:9000 Seq:504820761 Ack:358814417 Flags:32784 WindowSize:229 Checksum:34972 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:19:58 connection established
2022/04/29 23:19:58 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 170 212 21 97 140 49 30 22 244 25 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:19:58 checksumer: &{sum:424669 oddByte:33 length:39}
2022/04/29 23:19:58 ret:  424702
2022/04/29 23:19:58 ret:  31492
2022/04/29 23:19:58 ret:  31492
2022/04/29 23:19:58 boom packet injected
2022/04/29 23:19:58 tcp packet: &{SrcPort:43732 DestPort:9000 Seq:504820761 Ack:358814417 Flags:32785 WindowSize:229 Checksum:34971 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:00 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2780805016 Ack:357260278 Flags:32784 WindowSize:229 Checksum:59036 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:00 tcp packet: &{SrcPort:43585 DestPort:9000 Seq:2160776155 Ack:0 Flags:40962 WindowSize:29200 Checksum:44622 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:00 tcp packet: &{SrcPort:43585 DestPort:9000 Seq:2160776156 Ack:3271547023 Flags:32784 WindowSize:229 Checksum:50108 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:00 connection established
2022/04/29 23:20:00 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 170 65 194 254 81 239 128 202 211 220 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:00 checksumer: &{sum:572048 oddByte:33 length:39}
2022/04/29 23:20:00 ret:  572081
2022/04/29 23:20:00 ret:  47801
2022/04/29 23:20:00 ret:  47801
2022/04/29 23:20:00 boom packet injected
2022/04/29 23:20:00 tcp packet: &{SrcPort:43585 DestPort:9000 Seq:2160776156 Ack:3271547023 Flags:32785 WindowSize:229 Checksum:50107 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:02 tcp packet: &{SrcPort:46048 DestPort:9000 Seq:3776567337 Ack:4245883342 Flags:32784 WindowSize:229 Checksum:55952 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:02 tcp packet: &{SrcPort:41302 DestPort:9000 Seq:726082729 Ack:0 Flags:40962 WindowSize:29200 Checksum:46110 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:02 tcp packet: &{SrcPort:41302 DestPort:9000 Seq:726082730 Ack:2733335448 Flags:32784 WindowSize:229 Checksum:22215 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:02 connection established
2022/04/29 23:20:02 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 161 86 162 233 220 248 43 71 36 170 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:02 checksumer: &{sum:527854 oddByte:33 length:39}
2022/04/29 23:20:02 ret:  527887
2022/04/29 23:20:02 ret:  3607
2022/04/29 23:20:02 ret:  3607
2022/04/29 23:20:02 boom packet injected
2022/04/29 23:20:02 tcp packet: &{SrcPort:41302 DestPort:9000 Seq:726082730 Ack:2733335448 Flags:32785 WindowSize:229 Checksum:22214 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:04 tcp packet: &{SrcPort:45776 DestPort:9000 Seq:1559167050 Ack:3041510474 Flags:32784 WindowSize:229 Checksum:46422 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:04 tcp packet: &{SrcPort:40957 DestPort:9000 Seq:902897646 Ack:0 Flags:40962 WindowSize:29200 Checksum:42966 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:04 tcp packet: &{SrcPort:40957 DestPort:9000 Seq:902897647 Ack:3429559867 Flags:32784 WindowSize:229 Checksum:36490 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:04 connection established
2022/04/29 23:20:04 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 159 253 204 105 103 155 53 209 31 239 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:04 checksumer: &{sum:566950 oddByte:33 length:39}
2022/04/29 23:20:04 ret:  566983
2022/04/29 23:20:04 ret:  42703
2022/04/29 23:20:04 ret:  42703
2022/04/29 23:20:04 boom packet injected
2022/04/29 23:20:04 tcp packet: &{SrcPort:40957 DestPort:9000 Seq:902897647 Ack:3429559867 Flags:32785 WindowSize:229 Checksum:36489 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:06 tcp packet: &{SrcPort:41936 DestPort:9000 Seq:3930993856 Ack:4057420450 Flags:32784 WindowSize:229 Checksum:10236 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:06 tcp packet: &{SrcPort:38853 DestPort:9000 Seq:1818898278 Ack:0 Flags:40962 WindowSize:29200 Checksum:25132 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:06 tcp packet: &{SrcPort:38853 DestPort:9000 Seq:1818898279 Ack:2356955717 Flags:32784 WindowSize:229 Checksum:9460 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:06 connection established
2022/04/29 23:20:06 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 151 197 140 122 195 165 108 106 47 103 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:06 checksumer: &{sum:498433 oddByte:33 length:39}
2022/04/29 23:20:06 ret:  498466
2022/04/29 23:20:06 ret:  39721
2022/04/29 23:20:06 ret:  39721
2022/04/29 23:20:06 boom packet injected
2022/04/29 23:20:06 tcp packet: &{SrcPort:38853 DestPort:9000 Seq:1818898279 Ack:2356955717 Flags:32785 WindowSize:229 Checksum:9459 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:08 tcp packet: &{SrcPort:43732 DestPort:9000 Seq:504820762 Ack:358814418 Flags:32784 WindowSize:229 Checksum:14969 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:08 tcp packet: &{SrcPort:43360 DestPort:9000 Seq:1573122391 Ack:0 Flags:40962 WindowSize:29200 Checksum:38262 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:08 tcp packet: &{SrcPort:43360 DestPort:9000 Seq:1573122392 Ack:3388216372 Flags:32784 WindowSize:229 Checksum:18695 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:08 connection established
2022/04/29 23:20:08 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 169 96 201 242 141 148 93 195 241 88 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:08 checksumer: &{sum:518093 oddByte:33 length:39}
2022/04/29 23:20:08 ret:  518126
2022/04/29 23:20:08 ret:  59381
2022/04/29 23:20:08 ret:  59381
2022/04/29 23:20:08 boom packet injected
2022/04/29 23:20:08 tcp packet: &{SrcPort:43360 DestPort:9000 Seq:1573122392 Ack:3388216372 Flags:32785 WindowSize:229 Checksum:18694 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:10 tcp packet: &{SrcPort:43585 DestPort:9000 Seq:2160776157 Ack:3271547024 Flags:32784 WindowSize:229 Checksum:30104 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:10 tcp packet: &{SrcPort:43223 DestPort:9000 Seq:1565827177 Ack:0 Flags:40962 WindowSize:29200 Checksum:57227 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:10 tcp packet: &{SrcPort:43223 DestPort:9000 Seq:1565827178 Ack:400019098 Flags:32784 WindowSize:229 Checksum:33538 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:10 connection established
2022/04/29 23:20:10 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 168 215 23 214 71 250 93 84 160 106 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:10 checksumer: &{sum:543363 oddByte:33 length:39}
2022/04/29 23:20:10 ret:  543396
2022/04/29 23:20:10 ret:  19116
2022/04/29 23:20:10 ret:  19116
2022/04/29 23:20:10 boom packet injected
2022/04/29 23:20:10 tcp packet: &{SrcPort:43223 DestPort:9000 Seq:1565827178 Ack:400019098 Flags:32785 WindowSize:229 Checksum:33537 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:12 tcp packet: &{SrcPort:41302 DestPort:9000 Seq:726082731 Ack:2733335449 Flags:32784 WindowSize:229 Checksum:2212 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:12 tcp packet: &{SrcPort:35458 DestPort:9000 Seq:3970859313 Ack:0 Flags:40962 WindowSize:29200 Checksum:34285 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:12 tcp packet: &{SrcPort:35458 DestPort:9000 Seq:3970859314 Ack:1205305101 Flags:32784 WindowSize:229 Checksum:16672 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:12 connection established
2022/04/29 23:20:12 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 138 130 71 213 248 109 236 174 129 50 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:12 checksumer: &{sum:494262 oddByte:33 length:39}
2022/04/29 23:20:12 ret:  494295
2022/04/29 23:20:12 ret:  35550
2022/04/29 23:20:12 ret:  35550
2022/04/29 23:20:12 boom packet injected
2022/04/29 23:20:12 tcp packet: &{SrcPort:35458 DestPort:9000 Seq:3970859314 Ack:1205305101 Flags:32785 WindowSize:229 Checksum:16671 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:14 tcp packet: &{SrcPort:40957 DestPort:9000 Seq:902897648 Ack:3429559868 Flags:32784 WindowSize:229 Checksum:16486 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:14 tcp packet: &{SrcPort:37999 DestPort:9000 Seq:1986103117 Ack:0 Flags:40962 WindowSize:29200 Checksum:58464 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:14 tcp packet: &{SrcPort:37999 DestPort:9000 Seq:1986103118 Ack:1009626078 Flags:32784 WindowSize:229 Checksum:30364 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:14 connection established
2022/04/29 23:20:14 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 148 111 60 44 37 62 118 97 135 78 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:14 checksumer: &{sum:421234 oddByte:33 length:39}
2022/04/29 23:20:14 ret:  421267
2022/04/29 23:20:14 ret:  28057
2022/04/29 23:20:14 ret:  28057
2022/04/29 23:20:14 boom packet injected
2022/04/29 23:20:14 tcp packet: &{SrcPort:37999 DestPort:9000 Seq:1986103118 Ack:1009626078 Flags:32785 WindowSize:229 Checksum:30363 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:16 tcp packet: &{SrcPort:38853 DestPort:9000 Seq:1818898280 Ack:2356955718 Flags:32784 WindowSize:229 Checksum:54993 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:16 tcp packet: &{SrcPort:34720 DestPort:9000 Seq:1463194880 Ack:0 Flags:40962 WindowSize:29200 Checksum:64214 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:16 tcp packet: &{SrcPort:34720 DestPort:9000 Seq:1463194881 Ack:621651864 Flags:32784 WindowSize:229 Checksum:41126 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:16 connection established
2022/04/29 23:20:16 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 135 160 37 12 32 248 87 54 149 1 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:16 checksumer: &{sum:442424 oddByte:33 length:39}
2022/04/29 23:20:16 ret:  442457
2022/04/29 23:20:16 ret:  49247
2022/04/29 23:20:16 ret:  49247
2022/04/29 23:20:16 boom packet injected
2022/04/29 23:20:16 tcp packet: &{SrcPort:34720 DestPort:9000 Seq:1463194881 Ack:621651864 Flags:32785 WindowSize:229 Checksum:41125 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:18 tcp packet: &{SrcPort:43360 DestPort:9000 Seq:1573122393 Ack:3388216373 Flags:32784 WindowSize:229 Checksum:64227 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:18 tcp packet: &{SrcPort:40476 DestPort:9000 Seq:339528396 Ack:0 Flags:40962 WindowSize:29200 Checksum:59832 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:18 tcp packet: &{SrcPort:40476 DestPort:9000 Seq:339528397 Ack:2996243357 Flags:32784 WindowSize:229 Checksum:38441 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:18 connection established
2022/04/29 23:20:18 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 158 28 178 149 132 253 20 60 202 205 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:18 checksumer: &{sum:498994 oddByte:33 length:39}
2022/04/29 23:20:18 ret:  499027
2022/04/29 23:20:18 ret:  40282
2022/04/29 23:20:18 ret:  40282
2022/04/29 23:20:18 boom packet injected
2022/04/29 23:20:18 tcp packet: &{SrcPort:40476 DestPort:9000 Seq:339528397 Ack:2996243357 Flags:32785 WindowSize:229 Checksum:38440 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:20 tcp packet: &{SrcPort:43223 DestPort:9000 Seq:1565827179 Ack:400019099 Flags:32784 WindowSize:229 Checksum:13534 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:20 tcp packet: &{SrcPort:39238 DestPort:9000 Seq:2558651718 Ack:0 Flags:40962 WindowSize:29200 Checksum:16382 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:20 tcp packet: &{SrcPort:39238 DestPort:9000 Seq:2558651719 Ack:3857736276 Flags:32784 WindowSize:229 Checksum:23181 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:20 connection established
2022/04/29 23:20:20 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 153 70 229 238 219 180 152 129 237 71 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:20 checksumer: &{sum:497502 oddByte:33 length:39}
2022/04/29 23:20:20 ret:  497535
2022/04/29 23:20:20 ret:  38790
2022/04/29 23:20:20 ret:  38790
2022/04/29 23:20:20 boom packet injected
2022/04/29 23:20:20 tcp packet: &{SrcPort:39238 DestPort:9000 Seq:2558651719 Ack:3857736276 Flags:32785 WindowSize:229 Checksum:23180 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:22 tcp packet: &{SrcPort:35458 DestPort:9000 Seq:3970859315 Ack:1205305102 Flags:32784 WindowSize:229 Checksum:62203 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:22 tcp packet: &{SrcPort:34464 DestPort:9000 Seq:178027062 Ack:0 Flags:40962 WindowSize:29200 Checksum:19400 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.81
2022/04/29 23:20:22 tcp packet: &{SrcPort:34464 DestPort:9000 Seq:178027063 Ack:3957469776 Flags:32784 WindowSize:229 Checksum:34968 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.81
2022/04/29 23:20:22 connection established
2022/04/29 23:20:22 calling checksumTCP: 10.244.4.102 10.244.3.81 [35 40 134 160 235 224 171 176 10 156 122 55 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/29 23:20:22 checksumer: &{sum:518432 oddByte:33 length:39}
2022/04/29 23:20:22 ret:  518465
2022/04/29 23:20:22 ret:  59720
2022/04/29 23:20:22 ret:  59720
2022/04/29 23:20:22 boom packet injected
2022/04/29 23:20:22 tcp packet: &{SrcPort:34464 DestPort:9000 Seq:178027063 Ack:3957469776 Flags:32785 WindowSize:229 Checksum:34967 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.81

Apr 29 23:20:24.120: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:24.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-3635" for this suite.


• [SLOW TEST:72.124 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:03.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-4652
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:03.872: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:03.904: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:05.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:07.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:09.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:11.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:13.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:15.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:17.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:19.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:21.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:23.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:25.909: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:25.913: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:31.935: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:31.935: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:31.943: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:31.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4652" for this suite.


S [SKIPPING] [28.206 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:22.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-1748
STEP: creating a client pod for probing the service svc-udp
Apr 29 23:19:22.333: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:24.337: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:26.336: INFO: The status of Pod pod-client is Running (Ready = true)
Apr 29 23:19:26.345: INFO: Pod client logs: Fri Apr 29 23:19:24 UTC 2022
Fri Apr 29 23:19:24 UTC 2022 Try: 1

Fri Apr 29 23:19:24 UTC 2022 Try: 2

Fri Apr 29 23:19:24 UTC 2022 Try: 3

Fri Apr 29 23:19:24 UTC 2022 Try: 4

Fri Apr 29 23:19:24 UTC 2022 Try: 5

Fri Apr 29 23:19:24 UTC 2022 Try: 6

Fri Apr 29 23:19:24 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Apr 29 23:19:26.359: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:28.363: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:30.362: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-1748 to expose endpoints map[pod-server-1:[80]]
Apr 29 23:19:30.373: INFO: successfully validated that service svc-udp in namespace conntrack-1748 exposes endpoints map[pod-server-1:[80]]
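For reference, the service created a few steps above is an ordinary NodePort service exposing UDP port 80 and, as the endpoint map shows, backed by pod-server-1. The sketch below only illustrates the shape of such an object: the name, namespace and port come from the log, while the selector label is a placeholder and the NodePort itself is left for the API server to allocate, since its value is not printed here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// udpNodePortService builds a Service of the shape the steps above describe:
// type NodePort, protocol UDP, forwarding to the backend pod's port 80.
func udpNodePortService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "svc-udp", Namespace: "conntrack-1748"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "udp-server"}, // placeholder label
			Ports: []corev1.ServicePort{{
				Name:       "udp",
				Protocol:   corev1.ProtocolUDP,
				Port:       80, // matches the pod-server-1:[80] endpoint above
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
}

func main() {
	svc := udpNodePortService()
	fmt.Printf("%s/%s (%s)\n", svc.Namespace, svc.Name, svc.Spec.Type)
}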
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Apr 29 23:20:30.400: INFO: Pod client logs: Fri Apr 29 23:19:24 UTC 2022
Fri Apr 29 23:19:24 UTC 2022 Try: 1

Fri Apr 29 23:19:24 UTC 2022 Try: 2

Fri Apr 29 23:19:24 UTC 2022 Try: 3

Fri Apr 29 23:19:24 UTC 2022 Try: 4

Fri Apr 29 23:19:24 UTC 2022 Try: 5

Fri Apr 29 23:19:24 UTC 2022 Try: 6

Fri Apr 29 23:19:24 UTC 2022 Try: 7

Fri Apr 29 23:19:29 UTC 2022 Try: 8

Fri Apr 29 23:19:29 UTC 2022 Try: 9

Fri Apr 29 23:19:29 UTC 2022 Try: 10

Fri Apr 29 23:19:29 UTC 2022 Try: 11

Fri Apr 29 23:19:29 UTC 2022 Try: 12

Fri Apr 29 23:19:29 UTC 2022 Try: 13

Fri Apr 29 23:19:34 UTC 2022 Try: 14

Fri Apr 29 23:19:34 UTC 2022 Try: 15

Fri Apr 29 23:19:34 UTC 2022 Try: 16

Fri Apr 29 23:19:34 UTC 2022 Try: 17

Fri Apr 29 23:19:34 UTC 2022 Try: 18

Fri Apr 29 23:19:34 UTC 2022 Try: 19

Fri Apr 29 23:19:39 UTC 2022 Try: 20

Fri Apr 29 23:19:39 UTC 2022 Try: 21

Fri Apr 29 23:19:39 UTC 2022 Try: 22

Fri Apr 29 23:19:39 UTC 2022 Try: 23

Fri Apr 29 23:19:39 UTC 2022 Try: 24

Fri Apr 29 23:19:39 UTC 2022 Try: 25

Fri Apr 29 23:19:44 UTC 2022 Try: 26

Fri Apr 29 23:19:44 UTC 2022 Try: 27

Fri Apr 29 23:19:44 UTC 2022 Try: 28

Fri Apr 29 23:19:44 UTC 2022 Try: 29

Fri Apr 29 23:19:44 UTC 2022 Try: 30

Fri Apr 29 23:19:44 UTC 2022 Try: 31

Fri Apr 29 23:19:49 UTC 2022 Try: 32

Fri Apr 29 23:19:49 UTC 2022 Try: 33

Fri Apr 29 23:19:49 UTC 2022 Try: 34

Fri Apr 29 23:19:49 UTC 2022 Try: 35

Fri Apr 29 23:19:49 UTC 2022 Try: 36

Fri Apr 29 23:19:49 UTC 2022 Try: 37

Fri Apr 29 23:19:54 UTC 2022 Try: 38

Fri Apr 29 23:19:54 UTC 2022 Try: 39

Fri Apr 29 23:19:54 UTC 2022 Try: 40

Fri Apr 29 23:19:54 UTC 2022 Try: 41

Fri Apr 29 23:19:54 UTC 2022 Try: 42

Fri Apr 29 23:19:54 UTC 2022 Try: 43

Fri Apr 29 23:19:59 UTC 2022 Try: 44

Fri Apr 29 23:19:59 UTC 2022 Try: 45

Fri Apr 29 23:19:59 UTC 2022 Try: 46

Fri Apr 29 23:19:59 UTC 2022 Try: 47

Fri Apr 29 23:19:59 UTC 2022 Try: 48

Fri Apr 29 23:19:59 UTC 2022 Try: 49

Fri Apr 29 23:20:04 UTC 2022 Try: 50

Fri Apr 29 23:20:04 UTC 2022 Try: 51

Fri Apr 29 23:20:04 UTC 2022 Try: 52

Fri Apr 29 23:20:04 UTC 2022 Try: 53

Fri Apr 29 23:20:04 UTC 2022 Try: 54

Fri Apr 29 23:20:04 UTC 2022 Try: 55

Fri Apr 29 23:20:09 UTC 2022 Try: 56

Fri Apr 29 23:20:09 UTC 2022 Try: 57

Fri Apr 29 23:20:09 UTC 2022 Try: 58

Fri Apr 29 23:20:09 UTC 2022 Try: 59

Fri Apr 29 23:20:09 UTC 2022 Try: 60

Fri Apr 29 23:20:09 UTC 2022 Try: 61

Fri Apr 29 23:20:14 UTC 2022 Try: 62

Fri Apr 29 23:20:14 UTC 2022 Try: 63

Fri Apr 29 23:20:14 UTC 2022 Try: 64

Fri Apr 29 23:20:14 UTC 2022 Try: 65

Fri Apr 29 23:20:14 UTC 2022 Try: 66

Fri Apr 29 23:20:14 UTC 2022 Try: 67

Fri Apr 29 23:20:19 UTC 2022 Try: 68

Fri Apr 29 23:20:19 UTC 2022 Try: 69

Fri Apr 29 23:20:19 UTC 2022 Try: 70

Fri Apr 29 23:20:19 UTC 2022 Try: 71

Fri Apr 29 23:20:19 UTC 2022 Try: 72

Fri Apr 29 23:20:19 UTC 2022 Try: 73

Fri Apr 29 23:20:24 UTC 2022 Try: 74

Fri Apr 29 23:20:24 UTC 2022 Try: 75

Fri Apr 29 23:20:24 UTC 2022 Try: 76

Fri Apr 29 23:20:24 UTC 2022 Try: 77

Fri Apr 29 23:20:24 UTC 2022 Try: 78

Fri Apr 29 23:20:24 UTC 2022 Try: 79

Fri Apr 29 23:20:29 UTC 2022 Try: 80

Fri Apr 29 23:20:29 UTC 2022 Try: 81

Fri Apr 29 23:20:29 UTC 2022 Try: 82

Fri Apr 29 23:20:29 UTC 2022 Try: 83

Fri Apr 29 23:20:29 UTC 2022 Try: 84

Fri Apr 29 23:20:29 UTC 2022 Try: 85

Apr 29 23:20:30.400: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0002eca80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0002eca80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0002eca80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
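The pod-client output above is a probe loop: it stamps the time, increments the Try counter, and sends a UDP request towards node IP 10.10.190.208 on the service's NodePort, and the FAIL means the test never found evidence in that log that a probe reached pod-server-1 before giving up at 23:20:30. The real client appears to be a shell loop inside the agnhost image (those Try lines are its output); the Go sketch below only illustrates the same behaviour, and the NodePort value is a placeholder because the allocated port is not printed in this log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Node IP taken from the log; 30000 is a placeholder NodePort.
	const target = "10.10.190.208:30000"

	for try := 1; try <= 85; try++ {
		fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)

		conn, err := net.Dial("udp", target)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}
		fmt.Fprintln(conn, "hostname") // ask the agnhost backend to echo its hostname
		conn.SetReadDeadline(time.Now().Add(2 * time.Second))
		buf := make([]byte, 256)
		if n, rerr := conn.Read(buf); rerr == nil {
			fmt.Println("reply:", string(buf[:n])) // a reply naming pod-server-1 would mean success
		}
		conn.Close()
		time.Sleep(time.Second)
	}
}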
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-1748".
STEP: Found 8 events.
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:23 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:24 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 298.734092ms
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:24 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:24 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:28 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:29 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 283.415009ms
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:29 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Apr 29 23:20:30.406: INFO: At 2022-04-29 23:19:29 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Apr 29 23:20:30.408: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Apr 29 23:20:30.408: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:22 +0000 UTC  }]
Apr 29 23:20:30.408: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:26 +0000 UTC  }]
Apr 29 23:20:30.408: INFO: 
Apr 29 23:20:30.413: INFO: 
Logging node info for node master1
Apr 29 23:20:30.415: INFO: Node Info: &Node{ObjectMeta:{master1    c968c2e7-7594-4f6e-b85d-932008e8124f 74597 0 2022-04-29 19:57:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:27 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:27 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:27 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:20:27 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:20:30.416: INFO: 
Logging kubelet events for node master1
Apr 29 23:20:30.418: INFO: 
Logging pods the kubelet thinks are on node master1
Apr 29 23:20:30.444: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container coredns ready: true, restart count 1
Apr 29 23:20:30.444: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container docker-registry ready: true, restart count 0
Apr 29 23:20:30.444: INFO: 	Container nginx ready: true, restart count 0
Apr 29 23:20:30.444: INFO: no-snat-testwb5fx started at 2022-04-29 23:20:22 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container no-snat-test ready: true, restart count 0
Apr 29 23:20:30.444: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container nfd-controller ready: true, restart count 0
Apr 29 23:20:30.444: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:20:30.444: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:20:30.444: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 29 23:20:30.444: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 29 23:20:30.444: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container kube-scheduler ready: true, restart count 1
Apr 29 23:20:30.444: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container kube-proxy ready: true, restart count 1
Apr 29 23:20:30.444: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Init container install-cni ready: true, restart count 0
Apr 29 23:20:30.444: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 29 23:20:30.444: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.444: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:20:30.547: INFO: 
Latency metrics for node master1
Apr 29 23:20:30.547: INFO: 
Logging node info for node master2
Apr 29 23:20:30.550: INFO: Node Info: &Node{ObjectMeta:{master2    5b362581-f2d5-419c-a0b0-3aad7bec82f9 74385 0 2022-04-29 19:57:49 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:20:30.550: INFO: 
Logging kubelet events for node master2
Apr 29 23:20:30.553: INFO: 
Logging pods the kubelet thinks are on node master2
Apr 29 23:20:30.569: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 29 23:20:30.569: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:20:30.569: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Init container install-cni ready: true, restart count 0
Apr 29 23:20:30.569: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 29 23:20:30.569: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:20:30.569: INFO: no-snat-testzvbcb started at 2022-04-29 23:20:22 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container no-snat-test ready: true, restart count 0
Apr 29 23:20:30.569: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-controller-manager ready: true, restart count 1
Apr 29 23:20:30.569: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-scheduler ready: true, restart count 3
Apr 29 23:20:30.569: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container autoscaler ready: true, restart count 1
Apr 29 23:20:30.569: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container coredns ready: true, restart count 2
Apr 29 23:20:30.569: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:20:30.569: INFO: 	Container prometheus-operator ready: true, restart count 0
Apr 29 23:20:30.569: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.569: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:20:30.569: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:20:30.671: INFO: 
Latency metrics for node master2
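[editor's note] The "Logging node info" dump above is a raw Go struct print of the Node object (labels, managed fields, conditions, addresses, images). The following is a minimal, hypothetical sketch of how the same node conditions could be fetched with client-go; it is not the e2e framework's code. The node name "master2" and kubeconfig path "/root/.kube/config" are taken from this log; everything else (package and variable names) is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the Node object and print its conditions, mirroring the
	// MemoryPressure / DiskPressure / PIDPressure / Ready lines in the dump.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "master2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s: %s)\n", c.Type, c.Status, c.Reason, c.Message)
	}
}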
Apr 29 23:20:30.671: INFO: 
Logging node info for node master3
Apr 29 23:20:30.674: INFO: Node Info: &Node{ObjectMeta:{master3    1096e515-b559-4c90-b0f7-3398537b5f9e 74383 0 2022-04-29 19:58:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:20:20 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:20:30.675: INFO: 
Logging kubelet events for node master3
Apr 29 23:20:30.678: INFO: 
Logging pods the kubelet thinks are on node master3
Apr 29 23:20:30.692: INFO: no-snat-testsvbdp started at 2022-04-29 23:20:22 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container no-snat-test ready: true, restart count 0
Apr 29 23:20:30.692: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 29 23:20:30.692: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container kube-controller-manager ready: true, restart count 3
Apr 29 23:20:30.692: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container kube-scheduler ready: true, restart count 2
Apr 29 23:20:30.692: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:20:30.692: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Init container install-cni ready: true, restart count 0
Apr 29 23:20:30.692: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 29 23:20:30.692: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:20:30.692: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.692: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:20:30.692: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:20:30.777: INFO: 
Latency metrics for node master3
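[editor's note] The per-node pod listings above ("... started at ...", "Container ... ready: ..., restart count ...") enumerate the pods bound to each node. A hedged sketch of one way to reproduce that listing with client-go is below, using a field selector on spec.nodeName; the helper name logPodsOnNode and the package name are hypothetical, and the node name "master3" is taken from this log.

package nodedump

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// logPodsOnNode lists every pod scheduled to nodeName and prints each
// container's readiness and restart count, similar to the log lines above.
func logPodsOnNode(cs kubernetes.Interface, nodeName string) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		// Only pods the scheduler has bound to this node.
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s started at %v\n", p.Name, p.Status.StartTime)
		for _, s := range p.Status.ContainerStatuses {
			fmt.Printf("\tContainer %s ready: %v, restart count %d\n", s.Name, s.Ready, s.RestartCount)
		}
	}
	return nil
}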
Apr 29 23:20:30.777: INFO: 
Logging node info for node node1
Apr 29 23:20:30.780: INFO: Node Info: &Node{ObjectMeta:{node1    6842a10e-614a-46f0-b405-bc18936b0017 74637 0 2022-04-29 19:59:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 22:27:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:28 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:28 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:28 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:20:28 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:20:30.781: INFO: 
Logging kubelet events for node node1
Apr 29 23:20:30.784: INFO: 
Logging pods the kubelet thinks are on node node1
Apr 29 23:20:30.803: INFO: netserver-0 started at 2022-04-29 23:19:51 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container webserver ready: false, restart count 0
Apr 29 23:20:30.803: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 29 23:20:30.803: INFO: startup-script started at 2022-04-29 23:19:20 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container startup-script ready: true, restart count 0
Apr 29 23:20:30.803: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Apr 29 23:20:30.803: INFO: test-container-pod started at 2022-04-29 23:20:23 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container webserver ready: false, restart count 0
Apr 29 23:20:30.803: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 23:20:30.803: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 29 23:20:30.803: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container nodereport ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container reconcile ready: true, restart count 0
Apr 29 23:20:30.803: INFO: netserver-0 started at 2022-04-29 23:20:20 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container webserver ready: false, restart count 0
Apr 29 23:20:30.803: INFO: no-snat-testn2bgf started at 2022-04-29 23:20:22 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container no-snat-test ready: true, restart count 0
Apr 29 23:20:30.803: INFO: up-down-2-65dld started at 2022-04-29 23:19:24 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container up-down-2 ready: true, restart count 0
Apr 29 23:20:30.803: INFO: service-headless-toggled-vsbpf started at 2022-04-29 23:20:15 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 29 23:20:30.803: INFO: service-headless-toggled-8qw2t started at 2022-04-29 23:20:15 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 29 23:20:30.803: INFO: service-headless-toggled-z997q started at 2022-04-29 23:20:15 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 29 23:20:30.803: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 29 23:20:30.803: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container discover ready: false, restart count 0
Apr 29 23:20:30.803: INFO: 	Container init ready: false, restart count 0
Apr 29 23:20:30.803: INFO: 	Container install ready: false, restart count 0
Apr 29 23:20:30.803: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container config-reloader ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container grafana ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container prometheus ready: true, restart count 1
Apr 29 23:20:30.803: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container tas-extender ready: true, restart count 0
Apr 29 23:20:30.803: INFO: verify-service-down-host-exec-pod started at 2022-04-29 23:20:25 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container agnhost-container ready: false, restart count 0
Apr 29 23:20:30.803: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:20:30.803: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:20:30.803: INFO: netserver-0 started at 2022-04-29 23:20:03 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:30.803: INFO: iperf2-server-deployment-59979d877-p2c8f started at 2022-04-29 23:20:24 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container iperf2-server ready: false, restart count 0
Apr 29 23:20:30.803: INFO: pod-client started at 2022-04-29 23:19:22 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container pod-client ready: true, restart count 0
Apr 29 23:20:30.803: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Init container install-cni ready: true, restart count 2
Apr 29 23:20:30.803: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 29 23:20:30.803: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container collectd ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 29 23:20:30.803: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 29 23:20:30.803: INFO: netserver-0 started at 2022-04-29 23:20:03 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:30.803: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:20:30.803: INFO: test-container-pod started at 2022-04-29 23:20:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:30.803: INFO: up-down-2-xcs5g started at 2022-04-29 23:19:24 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:30.803: INFO: 	Container up-down-2 ready: true, restart count 0
Apr 29 23:20:31.222: INFO: 
Latency metrics for node node1
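[editor's note] The "Latency metrics for node ..." lines summarize metrics scraped from each node's kubelet. One plausible way to pull the raw kubelet metrics through the API server's node proxy subresource is sketched below; this is an assumption about how such data could be gathered, not the framework's implementation. The helper name dumpKubeletMetrics is hypothetical; the node name "node1" comes from this log, and parsing of the Prometheus text format is omitted.

package nodedump

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// dumpKubeletMetrics fetches /api/v1/nodes/<node>/proxy/metrics via the
// API server and reports how much metrics data the kubelet returned.
func dumpKubeletMetrics(cs kubernetes.Interface, nodeName string) error {
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(nodeName).
		SubResource("proxy").
		Suffix("metrics").
		DoRaw(context.TODO())
	if err != nil {
		return err
	}
	fmt.Printf("%d bytes of Prometheus-format metrics from %s\n", len(raw), nodeName)
	return nil
}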
Apr 29 23:20:31.222: INFO: 
Logging node info for node node2
Apr 29 23:20:31.226: INFO: Node Info: &Node{ObjectMeta:{node2    2f399869-e81b-465d-97b4-806b6186d34a 74672 0 2022-04-29 19:59:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 22:27:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-04-29 22:53:59 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:30 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:30 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:20:30 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:20:30 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:20:31.227: INFO: 
Logging kubelet events for node node2
Apr 29 23:20:31.229: INFO: 
Logging pods the kubelet thinks are on node node2
Apr 29 23:20:31.995: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 29 23:20:31.995: INFO: boom-server started at 2022-04-29 23:19:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container boom-server ready: false, restart count 0
Apr 29 23:20:31.995: INFO: netserver-1 started at 2022-04-29 23:19:43 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:31.995: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:20:31.995: INFO: netserver-1 started at 2022-04-29 23:20:03 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:31.995: INFO: pod-server-1 started at 2022-04-29 23:19:26 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 29 23:20:31.995: INFO: service-headless-9vxdm started at 2022-04-29 23:20:06 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container service-headless ready: true, restart count 0
Apr 29 23:20:31.995: INFO: host-test-container-pod started at 2022-04-29 23:20:23 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 29 23:20:31.995: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container discover ready: false, restart count 0
Apr 29 23:20:31.995: INFO: 	Container init ready: false, restart count 0
Apr 29 23:20:31.995: INFO: 	Container install ready: false, restart count 0
Apr 29 23:20:31.995: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container collectd ready: true, restart count 0
Apr 29 23:20:31.995: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 29 23:20:31.995: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 29 23:20:31.995: INFO: verify-service-up-exec-pod-sjqqj started at  (0+0 container statuses recorded)
Apr 29 23:20:31.995: INFO: netserver-1 started at 2022-04-29 23:19:51 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container webserver ready: false, restart count 0
Apr 29 23:20:31.995: INFO: test-container-pod started at 2022-04-29 23:20:13 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container webserver ready: false, restart count 0
Apr 29 23:20:31.995: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 29 23:20:31.995: INFO: netserver-1 started at 2022-04-29 23:20:03 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:31.995: INFO: service-headless-ttfgc started at 2022-04-29 23:20:06 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container service-headless ready: true, restart count 0
Apr 29 23:20:31.995: INFO: nodeport-update-service-xrvlw started at 2022-04-29 23:19:51 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 29 23:20:31.995: INFO: up-down-2-8q9zz started at 2022-04-29 23:19:24 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container up-down-2 ready: true, restart count 0
Apr 29 23:20:31.995: INFO: nodeport-update-service-f8mb5 started at 2022-04-29 23:19:51 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 29 23:20:31.995: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:20:31.995: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Init container install-cni ready: true, restart count 2
Apr 29 23:20:31.995: INFO: 	Container kube-flannel ready: true, restart count 3
Apr 29 23:20:31.995: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 29 23:20:31.995: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container nodereport ready: true, restart count 0
Apr 29 23:20:31.995: INFO: 	Container reconcile ready: true, restart count 0
Apr 29 23:20:31.995: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:20:31.995: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:20:31.996: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:20:31.996: INFO: execpod7jqfj started at 2022-04-29 23:20:03 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 29 23:20:31.996: INFO: netserver-1 started at 2022-04-29 23:20:20 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container webserver ready: false, restart count 0
Apr 29 23:20:31.996: INFO: no-snat-test4zgn4 started at 2022-04-29 23:20:22 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container no-snat-test ready: true, restart count 0
Apr 29 23:20:31.996: INFO: verify-service-up-host-exec-pod started at 2022-04-29 23:20:24 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 29 23:20:31.996: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container cmk-webhook ready: true, restart count 0
Apr 29 23:20:31.996: INFO: test-container-pod started at 2022-04-29 23:20:26 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container webserver ready: true, restart count 0
Apr 29 23:20:31.996: INFO: service-headless-jvgs9 started at 2022-04-29 23:20:06 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:20:31.996: INFO: 	Container service-headless ready: true, restart count 0
Apr 29 23:20:32.619: INFO: 
Latency metrics for node node2
Apr 29 23:20:32.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-1748" for this suite.


• Failure [70.338 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Apr 29 23:20:30.400: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":96,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:03.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-7639
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:03.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:03.456: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:05.461: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:07.459: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:09.461: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:11.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:13.461: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:15.460: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:17.460: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:19.462: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:21.460: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:23.461: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:23.466: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:33.501: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:33.501: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:33.508: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:33.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7639" for this suite.


S [SKIPPING] [30.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
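
The skip reason "Requires at least 2 nodes (not -1)" shows the node-count guard firing before a usable count was established; -1 stands in for "unknown". A small illustrative Go sketch of such a guard, not the framework's actual implementation:

package main

import "fmt"

// requireNodes mimics the kind of guard that produced the skip message:
// if fewer than want schedulable nodes are known (got may be -1 when the
// count could not be established), the spec is skipped.
func requireNodes(got, want int) error {
	if got < want {
		return fmt.Errorf("Requires at least %d nodes (not %d)", want, got)
	}
	return nil
}

func main() {
	if err := requireNodes(-1, 2); err != nil {
		fmt.Println("SKIP:", err)
	}
}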
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:22.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Apr 29 23:20:32.764: INFO: Waiting up to 2m0s to get response from 10.244.3.102:8080
Apr 29 23:20:32.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-test4zgn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip'
Apr 29 23:20:33.032: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip\n"
Apr 29 23:20:33.032: INFO: stdout: "10.244.4.140:39492"
STEP: Verifying the preserved source ip
Apr 29 23:20:33.032: INFO: Waiting up to 2m0s to get response from 10.244.2.2:8080
Apr 29 23:20:33.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-test4zgn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip'
Apr 29 23:20:33.322: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip\n"
Apr 29 23:20:33.322: INFO: stdout: "10.244.4.140:33808"
STEP: Verifying the preserved source ip
Apr 29 23:20:33.322: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 29 23:20:33.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-test4zgn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 29 23:20:33.550: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 29 23:20:33.550: INFO: stdout: "10.244.4.140:50372"
STEP: Verifying the preserved source ip
Apr 29 23:20:33.550: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Apr 29 23:20:33.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-test4zgn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Apr 29 23:20:33.848: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Apr 29 23:20:33.848: INFO: stdout: "10.244.4.140:49060"
STEP: Verifying the preserved source ip
Apr 29 23:20:33.848: INFO: Waiting up to 2m0s to get response from 10.244.4.140:8080
Apr 29 23:20:33.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testn2bgf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip'
Apr 29 23:20:34.360: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip\n"
Apr 29 23:20:34.360: INFO: stdout: "10.244.3.102:57808"
STEP: Verifying the preserved source ip
Apr 29 23:20:34.360: INFO: Waiting up to 2m0s to get response from 10.244.2.2:8080
Apr 29 23:20:34.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testn2bgf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip'
Apr 29 23:20:34.627: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip\n"
Apr 29 23:20:34.627: INFO: stdout: "10.244.3.102:56138"
STEP: Verifying the preserved source ip
Apr 29 23:20:34.627: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 29 23:20:34.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testn2bgf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 29 23:20:35.167: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 29 23:20:35.167: INFO: stdout: "10.244.3.102:49642"
STEP: Verifying the preserved source ip
Apr 29 23:20:35.167: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Apr 29 23:20:35.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testn2bgf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Apr 29 23:20:35.492: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Apr 29 23:20:35.492: INFO: stdout: "10.244.3.102:41482"
STEP: Verifying the preserved source ip
Apr 29 23:20:35.492: INFO: Waiting up to 2m0s to get response from 10.244.4.140:8080
Apr 29 23:20:35.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testsvbdp -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip'
Apr 29 23:20:35.735: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip\n"
Apr 29 23:20:35.735: INFO: stdout: "10.244.2.2:39272"
STEP: Verifying the preserved source ip
Apr 29 23:20:35.735: INFO: Waiting up to 2m0s to get response from 10.244.3.102:8080
Apr 29 23:20:35.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testsvbdp -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip'
Apr 29 23:20:35.993: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip\n"
Apr 29 23:20:35.994: INFO: stdout: "10.244.2.2:43732"
STEP: Verifying the preserved source ip
Apr 29 23:20:35.994: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 29 23:20:35.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testsvbdp -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 29 23:20:36.234: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 29 23:20:36.234: INFO: stdout: "10.244.2.2:45308"
STEP: Verifying the preserved source ip
Apr 29 23:20:36.234: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Apr 29 23:20:36.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testsvbdp -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Apr 29 23:20:36.487: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Apr 29 23:20:36.487: INFO: stdout: "10.244.2.2:57476"
STEP: Verifying the preserved source ip
Apr 29 23:20:36.487: INFO: Waiting up to 2m0s to get response from 10.244.4.140:8080
Apr 29 23:20:36.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testwb5fx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip'
Apr 29 23:20:36.735: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip\n"
Apr 29 23:20:36.735: INFO: stdout: "10.244.0.9:50402"
STEP: Verifying the preserved source ip
Apr 29 23:20:36.735: INFO: Waiting up to 2m0s to get response from 10.244.3.102:8080
Apr 29 23:20:36.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testwb5fx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip'
Apr 29 23:20:36.982: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip\n"
Apr 29 23:20:36.982: INFO: stdout: "10.244.0.9:47372"
STEP: Verifying the preserved source ip
Apr 29 23:20:36.982: INFO: Waiting up to 2m0s to get response from 10.244.2.2:8080
Apr 29 23:20:36.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testwb5fx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip'
Apr 29 23:20:37.267: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip\n"
Apr 29 23:20:37.267: INFO: stdout: "10.244.0.9:40476"
STEP: Verifying the preserved source ip
Apr 29 23:20:37.267: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Apr 29 23:20:37.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testwb5fx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Apr 29 23:20:37.539: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Apr 29 23:20:37.539: INFO: stdout: "10.244.0.9:58104"
STEP: Verifying the preserved source ip
Apr 29 23:20:37.540: INFO: Waiting up to 2m0s to get response from 10.244.4.140:8080
Apr 29 23:20:37.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testzvbcb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip'
Apr 29 23:20:37.814: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.140:8080/clientip\n"
Apr 29 23:20:37.814: INFO: stdout: "10.244.1.9:45720"
STEP: Verifying the preserved source ip
Apr 29 23:20:37.814: INFO: Waiting up to 2m0s to get response from 10.244.3.102:8080
Apr 29 23:20:37.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testzvbcb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip'
Apr 29 23:20:38.076: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.102:8080/clientip\n"
Apr 29 23:20:38.077: INFO: stdout: "10.244.1.9:42592"
STEP: Verifying the preserved source ip
Apr 29 23:20:38.077: INFO: Waiting up to 2m0s to get response from 10.244.2.2:8080
Apr 29 23:20:38.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testzvbcb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip'
Apr 29 23:20:38.314: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.2:8080/clientip\n"
Apr 29 23:20:38.314: INFO: stdout: "10.244.1.9:50968"
STEP: Verifying the preserved source ip
Apr 29 23:20:38.314: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 29 23:20:38.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-8735 exec no-snat-testzvbcb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 29 23:20:38.551: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 29 23:20:38.551: INFO: stdout: "10.244.1.9:48840"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:38.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-8735" for this suite.


• [SLOW TEST:15.875 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":5,"skipped":1026,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:32.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-8817
Apr 29 23:20:32.140: INFO: hairpin-test cluster ip: 10.233.52.128
STEP: creating a client/server pod
Apr 29 23:20:32.198: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:34.201: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:36.202: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:38.204: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-8817 to expose endpoints map[hairpin:[8080]]
Apr 29 23:20:38.212: INFO: successfully validated that service hairpin-test in namespace services-8817 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Apr 29 23:20:39.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8817 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Apr 29 23:20:39.582: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Apr 29 23:20:39.582: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 29 23:20:39.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8817 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.52.128 8080'
Apr 29 23:20:39.972: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.52.128 8080\nConnection to 10.233.52.128 8080 port [tcp/http-alt] succeeded!\n"
Apr 29 23:20:39.972: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:39.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8817" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:7.903 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":623,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:20.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-4262
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:20.281: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:20.314: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:22.317: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:24.322: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:26.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:28.319: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:30.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:32.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:34.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:36.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:38.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:40.319: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:42.316: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:42.321: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 23:20:44.325: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:48.347: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:48.347: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:48.354: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:48.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4262" for this suite.


S [SKIPPING] [28.185 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:40.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Apr 29 23:20:40.566: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-3738" to be "Succeeded or Failed"
Apr 29 23:20:40.568: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325311ms
Apr 29 23:20:42.572: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005485603s
Apr 29 23:20:44.576: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009474191s
Apr 29 23:20:46.580: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013581294s
Apr 29 23:20:48.583: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016540103s
STEP: Saw pod success
Apr 29 23:20:48.583: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:48.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3738" for this suite.


• [SLOW TEST:8.145 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":3,"skipped":860,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:48.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:20:48.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-263" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":3,"skipped":755,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:33.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-4188
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:34.002: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:34.036: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:36.040: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:38.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:40.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:42.044: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:44.040: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:46.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:48.041: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:50.040: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:52.040: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:54.040: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:56.039: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:56.043: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:02.070: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:02.070: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:02.076: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:02.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4188" for this suite.


S [SKIPPING] [28.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:32.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-458
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:32.772: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:32.804: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:34.808: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:36.808: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:38.809: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:40.808: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:42.810: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:44.809: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:46.807: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:48.808: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:50.808: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:52.807: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:54.809: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:54.815: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:02.888: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:02.888: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:02.894: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:02.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-458" for this suite.


S [SKIPPING] [30.255 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:21:03.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 29 23:21:03.078: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:03.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5074" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:48.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-9335
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:48.719: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:48.769: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:50.772: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:52.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:54.775: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:56.773: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:58.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:00.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:02.772: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:04.773: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:06.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:08.776: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:10.775: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:21:10.780: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:14.808: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:14.809: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:14.815: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:14.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9335" for this suite.


S [SKIPPING] [26.208 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:24.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Apr 29 23:20:24.290: INFO: deploying iperf2 server
Apr 29 23:20:24.294: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Apr 29 23:20:24.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Apr 29 23:20:26.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:20:28.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:20:30.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:20:32.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786871224, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 23:20:34.311: INFO: waiting for iperf2 server endpoints
Apr 29 23:20:36.315: INFO: found iperf2 server endpoints
Apr 29 23:20:36.315: INFO: waiting for client pods to be running
Apr 29 23:20:46.320: INFO: all client pods are ready: 2 pods
Apr 29 23:20:46.323: INFO: server pod phase Running
Apr 29 23:20:46.323: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 23:20:24 +0000 UTC Reason: Message:}
Apr 29 23:20:46.323: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 23:20:33 +0000 UTC Reason: Message:}
Apr 29 23:20:46.323: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 23:20:33 +0000 UTC Reason: Message:}
Apr 29 23:20:46.323: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 23:20:24 +0000 UTC Reason: Message:}
Apr 29 23:20:46.323: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-04-29 23:20:32 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://fcb247333c8ca6f6dc245f4775fabcb35e303cb3ad9574909b474bfc097d0331 Started:0xc004cd73ac}
Apr 29 23:20:46.323: INFO: found 2 matching client pods
Apr 29 23:20:46.326: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-8154 PodName:iperf2-clients-jdwhz ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:46.326: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:46.498: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Apr 29 23:20:46.498: INFO: iperf version: 
Apr 29 23:20:46.498: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-jdwhz (node node2)
Apr 29 23:20:46.501: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-8154 PodName:iperf2-clients-jdwhz ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:46.501: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:01.638: INFO: Exec stderr: ""
Apr 29 23:21:01.638: INFO: output from exec on client pod iperf2-clients-jdwhz (node node2): 
20220429232047.600,10.244.4.146,54996,10.233.15.240,6789,3,0.0-1.0,119406592,955252736
20220429232048.607,10.244.4.146,54996,10.233.15.240,6789,3,1.0-2.0,112328704,898629632
20220429232049.585,10.244.4.146,54996,10.233.15.240,6789,3,2.0-3.0,101974016,815792128
20220429232050.590,10.244.4.146,54996,10.233.15.240,6789,3,3.0-4.0,113770496,910163968
20220429232051.596,10.244.4.146,54996,10.233.15.240,6789,3,4.0-5.0,117571584,940572672
20220429232052.602,10.244.4.146,54996,10.233.15.240,6789,3,5.0-6.0,117702656,941621248
20220429232053.590,10.244.4.146,54996,10.233.15.240,6789,3,6.0-7.0,116391936,931135488
20220429232054.600,10.244.4.146,54996,10.233.15.240,6789,3,7.0-8.0,117702656,941621248
20220429232055.591,10.244.4.146,54996,10.233.15.240,6789,3,8.0-9.0,117833728,942669824
20220429232056.602,10.244.4.146,54996,10.233.15.240,6789,3,9.0-10.0,117571584,940572672
20220429232056.602,10.244.4.146,54996,10.233.15.240,6789,3,0.0-10.0,1152253952,921286780

Apr 29 23:21:01.641: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-8154 PodName:iperf2-clients-rxk5j ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:01.641: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:01.754: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Apr 29 23:21:01.754: INFO: iperf version: 
Apr 29 23:21:01.754: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-rxk5j (node node1)
Apr 29 23:21:01.757: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-8154 PodName:iperf2-clients-rxk5j ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:01.757: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:16.900: INFO: Exec stderr: ""
Apr 29 23:21:16.900: INFO: output from exec on client pod iperf2-clients-rxk5j (node node1): 
20220429232102.872,10.244.3.107,59166,10.233.15.240,6789,3,0.0-1.0,1978007552,15824060416
20220429232103.956,10.244.3.107,59166,10.233.15.240,6789,3,1.0-2.0,2294939648,18359517184
20220429232104.935,10.244.3.107,59166,10.233.15.240,6789,3,2.0-3.0,2922905600,23383244800
20220429232105.875,10.244.3.107,59166,10.233.15.240,6789,3,3.0-4.0,3178627072,25429016576
20220429232106.866,10.244.3.107,59166,10.233.15.240,6789,3,4.0-5.0,3295674368,26365394944
20220429232107.865,10.244.3.107,59166,10.233.15.240,6789,3,5.0-6.0,3332112384,26656899072
20220429232108.861,10.244.3.107,59166,10.233.15.240,6789,3,6.0-7.0,2610429952,20883439616
20220429232109.867,10.244.3.107,59166,10.233.15.240,6789,3,7.0-8.0,1806172160,14449377280
20220429232110.953,10.244.3.107,59166,10.233.15.240,6789,3,8.0-9.0,1208352768,9666822144
20220429232111.865,10.244.3.107,59166,10.233.15.240,6789,3,9.0-10.0,1478361088,11826888704
20220429232111.865,10.244.3.107,59166,10.233.15.240,6789,3,0.0-10.0,24105713664,19284426298

Apr 29 23:21:16.900: INFO:                                From                                 To    Bandwidth (MB/s)
Apr 29 23:21:16.900: INFO:                               node2                              node1                 110
Apr 29 23:21:16.900: INFO:                               node1                              node1                2299
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:16.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-8154" for this suite.


• [SLOW TEST:52.648 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":71,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:21:17.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Apr 29 23:21:17.142: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/: 
anaconda/
audit/
boot.log
[the identical anaconda/, audit/, boot.log listing was returned 19 more times by the remaining proxy probes; the rest of this proxy test's output is truncated here]
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-6325
STEP: changing service nodeport-range-test to out-of-range NodePort 60399
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 60399
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:17.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6325" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":4,"skipped":214,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:49.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-9224
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:49.359: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:49.418: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:51.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:53.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:55.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:57.420: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:59.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:01.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:03.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:05.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:07.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:09.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:11.421: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:21:11.428: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:17.470: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:17.470: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:17.479: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:17.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9224" for this suite.


S [SKIPPING] [28.233 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSS
------------------------------
Apr 29 23:21:17.496: INFO: Running AfterSuite actions on all nodes


Apr 29 23:21:17.496: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:38.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-5435
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:20:38.818: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:20:38.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:40.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:42.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:44.855: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:46.853: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:48.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:50.854: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:52.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:54.855: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:56.853: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:58.855: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:00.855: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:21:00.859: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:18.897: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:18.897: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:18.904: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:18.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5435" for this suite.


S [SKIPPING] [40.203 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Apr 29 23:21:18.915: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:12.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0429 23:19:12.268982      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:19:12.269: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:19:12.271: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-3791
STEP: creating service up-down-1 in namespace services-3791
STEP: creating replication controller up-down-1 in namespace services-3791
I0429 23:19:12.284045      29 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3791, replica count: 3
I0429 23:19:15.335194      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:18.336141      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:21.337143      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:24.339869      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-3791
STEP: creating service up-down-2 in namespace services-3791
STEP: creating replication controller up-down-2 in namespace services-3791
I0429 23:19:24.350972      29 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3791, replica count: 3
I0429 23:19:27.403610      29 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:30.404583      29 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Apr 29 23:19:30.406: INFO: Creating new host exec pod
Apr 29 23:19:30.417: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:32.421: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:34.425: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:19:34.425: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:19:42.444: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.15.3:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-host-exec-pod
Apr 29 23:19:42.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.15.3:80 2>&1 || true; echo; done'
Apr 29 23:19:42.925: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n"
Apr 29 23:19:42.926: INFO: stdout: "up-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\n"
Apr 29 23:19:42.926: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.15.3:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-exec-pod-z792q
Apr 29 23:19:42.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-exec-pod-z792q -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.15.3:80 2>&1 || true; echo; done'
Apr 29 23:19:43.292: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.15.3:80\n+ echo\n"
Apr 29 23:19:43.292: INFO: stdout: "up-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-xk4l2\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-nrh49\nup-down-1-t2mfg\nup-down-1-t2mfg\nup-down-1-nrh49\nup-down-1-t2mfg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3791
STEP: Deleting pod verify-service-up-exec-pod-z792q in namespace services-3791
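(The "verifying service has 3 reachable backends" step passes when the 150 wget probes, issued from both a host-network pod and a regular exec pod, between them hit every endpoint. A quick manual tally of the same probe — pod names and service IP copied from this run, executed by hand rather than by the suite:)

    kubectl --kubeconfig=/root/.kube/config -n services-3791 \
      exec verify-service-up-host-exec-pod -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.15.3:80 2>&1 || true; echo; done' \
      | sort | uniq -c
    # should show all three backends (up-down-1-nrh49, up-down-1-t2mfg, up-down-1-xk4l2) with non-zero counts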
STEP: verifying service up-down-2 is up
Apr 29 23:19:43.307: INFO: Creating new host exec pod
Apr 29 23:19:43.323: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:45.326: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:47.327: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:49.328: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:51.328: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:53.328: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:55.327: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:57.327: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:59.328: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:01.327: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:03.330: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:20:03.330: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:20:11.348: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-host-exec-pod
Apr 29 23:20:11.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done'
Apr 29 23:20:11.816: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n"
Apr 29 23:20:11.817: INFO: stdout: "up-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\n"
Apr 29 23:20:11.817: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-exec-pod-qgsmf
Apr 29 23:20:11.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-exec-pod-qgsmf -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done'
Apr 29 23:20:12.236: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n"
Apr 29 23:20:12.237: INFO: stdout: "up-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3791
STEP: Deleting pod verify-service-up-exec-pod-qgsmf in namespace services-3791
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3791, will wait for the garbage collector to delete the pods
Apr 29 23:20:12.309: INFO: Deleting ReplicationController up-down-1 took: 4.214763ms
Apr 29 23:20:12.409: INFO: Terminating ReplicationController up-down-1 pods took: 100.233778ms
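The teardown above is driven by the suite's Go helpers; a rough hand-run kubectl equivalent, assuming the same namespace and controller name (and assuming the helper labels its pods with name=up-down-1), would be:

# Delete the replication controller and let the garbage collector remove its pods
# (background cascading deletion, which is kubectl's default behaviour).
kubectl --namespace=services-3791 delete rc up-down-1 --cascade=background
# Wait until the controller's pods are gone before checking that the service is down.
# The name=up-down-1 selector is an assumption about how the helper labels its pods.
kubectl --namespace=services-3791 wait --for=delete pod -l name=up-down-1 --timeout=2m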
STEP: verifying service up-down-1 is not up
Apr 29 23:20:25.619: INFO: Creating new host exec pod
Apr 29 23:20:25.636: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:27.639: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:29.640: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:31.640: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:33.639: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:20:33.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.15.3:80 && echo service-down-failed'
Apr 29 23:20:36.330: INFO: rc: 28
Apr 29 23:20:36.330: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.15.3:80 && echo service-down-failed" in pod services-3791/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.15.3:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.15.3:80
command terminated with exit code 28

error:
exit status 28
Output: 
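What the exit-code-28 result above encodes: the probe only passes when curl times out against the old ClusterIP, i.e. when nothing answers on it any more. A minimal hand-run version of the same probe, assuming kubectl access and the pod and namespace names shown above, would be:

# Probe the ClusterIP of the deleted up-down-1 service from the host-network exec pod.
kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 \
  exec verify-service-down-host-exec-pod -- \
  /bin/sh -c 'curl -g -s --connect-timeout 2 http://10.233.15.3:80 && echo service-down-failed'
# Expected outcome: curl exits with code 28 (connection timed out) and "service-down-failed"
# is never printed, because the ClusterIP no longer has any endpoints behind it.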
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3791
STEP: verifying service up-down-2 is still up
Apr 29 23:20:36.340: INFO: Creating new host exec pod
Apr 29 23:20:36.352: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:38.357: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:40.356: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:20:40.356: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:20:46.372: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-host-exec-pod
Apr 29 23:20:46.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done'
Apr 29 23:20:46.845: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n"
Apr 29 23:20:46.845: INFO: stdout: "up-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\n"
Apr 29 23:20:46.845: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-exec-pod-k55c9
Apr 29 23:20:46.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-exec-pod-k55c9 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done'
Apr 29 23:20:47.872: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n"
Apr 29 23:20:47.873: INFO: stdout: "up-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3791
STEP: Deleting pod verify-service-up-exec-pod-k55c9 in namespace services-3791
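The two exec runs above implement the reachability check: 150 short-timeout requests against the service ClusterIP, once from a host-network pod and once from a regular pod, and the check passes when every expected backend pod name appears in the responses (each serve-hostname backend replies with its own pod name). A condensed hand-run version of the same check, a sketch only, might look like:

# Tally which backends answer on the service IP; three distinct names means three reachable endpoints.
kubectl --namespace=services-3791 exec verify-service-up-host-exec-pod -- /bin/sh -c \
  'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done' \
  | sort | uniq -c

In the output above such a tally would cover up-down-2-65dld, up-down-2-8q9zz and up-down-2-xcs5g, matching the three replicas of the up-down-2 controller.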
STEP: creating service up-down-3 in namespace services-3791
STEP: creating service up-down-3 in namespace services-3791
STEP: creating replication controller up-down-3 in namespace services-3791
I0429 23:20:47.902171      29 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3791, replica count: 3
I0429 23:20:50.952990      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:53.953313      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
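The up-down-3 service and its replication controller are created by the suite's Go helper; a hand-written equivalent is sketched below. The agnhost image tag and the name= label are assumptions about what the helper uses, not taken from this log.

kubectl --namespace=services-3791 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: up-down-3
spec:
  selector:
    name: up-down-3       # assumed label used by the helper's pods
  ports:
  - port: 80              # service port probed by the wget loops above
    targetPort: 9376      # port the serve-hostname container listens on
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: up-down-3
spec:
  replicas: 3
  selector:
    name: up-down-3
  template:
    metadata:
      labels:
        name: up-down-3
    spec:
      containers:
      - name: up-down-3
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed test image and tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF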
STEP: verifying service up-down-2 is still up
Apr 29 23:20:53.955: INFO: Creating new host exec pod
Apr 29 23:20:54.057: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:56.061: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:58.063: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:20:58.064: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:21:02.081: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-host-exec-pod
Apr 29 23:21:02.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done'
Apr 29 23:21:02.448: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n"
Apr 29 23:21:02.449: INFO: stdout: "up-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\n"
Apr 29 23:21:02.449: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-exec-pod-wcs7c
Apr 29 23:21:02.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-exec-pod-wcs7c -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.20.252:80 2>&1 || true; echo; done'
Apr 29 23:21:02.854: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.20.252:80\n+ echo\n"
Apr 29 23:21:02.854: INFO: stdout: "up-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-8q9zz\nup-down-2-xcs5g\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-xcs5g\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\nup-down-2-65dld\nup-down-2-65dld\nup-down-2-8q9zz\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3791
STEP: Deleting pod verify-service-up-exec-pod-wcs7c in namespace services-3791
STEP: verifying service up-down-3 is up
Apr 29 23:21:02.869: INFO: Creating new host exec pod
Apr 29 23:21:02.880: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:04.885: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:06.884: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:08.885: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:10.884: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:12.884: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:14.884: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:16.884: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:18.885: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:21:18.885: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:21:22.903: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.42.94:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-host-exec-pod
Apr 29 23:21:22.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.42.94:80 2>&1 || true; echo; done'
Apr 29 23:21:23.524: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n"
Apr 29 23:21:23.524: INFO: stdout: "up-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\n"
Apr 29 23:21:23.525: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.42.94:80 2>&1 || true; echo; done" in pod services-3791/verify-service-up-exec-pod-zjjkw
Apr 29 23:21:23.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 exec verify-service-up-exec-pod-zjjkw -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.42.94:80 2>&1 || true; echo; done'
Apr 29 23:21:24.075: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.42.94:80\n+ echo\n"
Apr 29 23:21:24.076: INFO: stdout: "up-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-7m88r\nup-down-3-w9wsr\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\nup-down-3-qdfpp\nup-down-3-w9wsr\nup-down-3-7m88r\n"
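The two exec runs above are the reachability probe this suite uses to call a service "up": 150 short-timeout wget requests against the ClusterIP, issued once from a host-network pod and once from an ordinary pod, with the step passing only if every backend pod name appears in the responses. A hedged, hand-run sketch of the same check, using the pod name and ClusterIP from this run (assumptions anywhere else), with the replies tallied per backend:

# Re-run the probe and count how often each endpoint answered.
kubectl --kubeconfig=/root/.kube/config --namespace=services-3791 \
  exec verify-service-up-exec-pod-zjjkw -- /bin/sh -c \
  'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.42.94:80 2>&1 || true; echo; done' \
  | sort | uniq -c
# All three names (up-down-3-7m88r, up-down-3-qdfpp, up-down-3-w9wsr) should appear;
# a missing one is what the framework would report as an unreachable endpoint.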
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3791
STEP: Deleting pod verify-service-up-exec-pod-zjjkw in namespace services-3791
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:24.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3791" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:131.851 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":73,"failed":0}
Apr 29 23:21:24.103: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:20:06.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-916
STEP: creating service service-headless in namespace services-916
STEP: creating replication controller service-headless in namespace services-916
I0429 23:20:06.168209      23 runners.go:190] Created replication controller with name: service-headless, namespace: services-916, replica count: 3
I0429 23:20:09.219316      23 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:12.220590      23 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:15.221645      23 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
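The runners.go lines above poll the service-headless replication controller until all 3 replicas report Running. A rough manual equivalent, watching the same rollout (the label selector is an assumption about how the controller labels its pods):

# Names and namespace are from this run; the selector name=service-headless is assumed.
kubectl --namespace=services-916 get rc service-headless
kubectl --namespace=services-916 get pods -l name=service-headless -w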
STEP: creating service in namespace services-916
STEP: creating service service-headless-toggled in namespace services-916
STEP: creating replication controller service-headless-toggled in namespace services-916
I0429 23:20:15.233919      23 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-916, replica count: 3
I0429 23:20:18.285934      23 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:21.286264      23 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:24.289041      23 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
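At this point both services exist and service-headless-toggled fronts the three pods just created; the 10.233.2.20 address probed below is its ClusterIP. A quick hand check of the same state, using names from this run:

kubectl --namespace=services-916 get svc service-headless service-headless-toggled -o wide
kubectl --namespace=services-916 get endpoints service-headless-toggled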
STEP: verifying service is up
Apr 29 23:20:24.291: INFO: Creating new host exec pod
Apr 29 23:20:24.303: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:26.306: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:28.307: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:30.307: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:20:30.307: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:20:36.323: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done" in pod services-916/verify-service-up-host-exec-pod
Apr 29 23:20:36.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done'
Apr 29 23:20:36.884: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n"
Apr 29 23:20:36.884: INFO: stdout: "service-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\
nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\n"
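What the framework asserts from the stdout above is only that each of the three service-headless-toggled pod names occurs at least once among the 150 responses. A small sketch of the same assertion over a saved copy of that output (responses.txt is a hypothetical file holding one hostname per line):

for name in service-headless-toggled-8qw2t service-headless-toggled-vsbpf service-headless-toggled-z997q; do
  grep -c "^$name$" responses.txt || echo "$name never answered"
done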
Apr 29 23:20:36.885: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done" in pod services-916/verify-service-up-exec-pod-sjqqj
Apr 29 23:20:36.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-up-exec-pod-sjqqj -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done'
Apr 29 23:20:37.542: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n"
Apr 29 23:20:37.542: INFO: stdout: "service-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\
nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-vsbpf\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\nservice-headless-toggled-z997q\nservice-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\nservice-headless-toggled-8qw2t\nservice-headless-toggled-8qw2t\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-916
STEP: Deleting pod verify-service-up-exec-pod-sjqqj in namespace services-916
STEP: verifying service-headless is not up
Apr 29 23:20:37.555: INFO: Creating new host exec pod
Apr 29 23:20:37.571: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:39.575: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:41.576: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:20:41.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed'
Apr 29 23:20:43.910: INFO: rc: 28
Apr 29 23:20:43.911: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed" in pod services-916/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.45.68:80
command terminated with exit code 28

error:
exit status 28
Output: 
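rc: 28 is curl's "operation timed out" exit code, and a timeout is the desired result here: 10.233.45.68 is the ClusterIP of service-headless, the service this test expects to stay unreachable. A hedged way to rerun the same negative probe by hand (kubectl exec propagates the inner command's exit status):

kubectl --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -c \
  'curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed'
echo "curl exit code: $?"   # 28 means the connection attempt timed out, i.e. the service is down as expected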
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-916
STEP: adding service.kubernetes.io/headless label
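The framework takes service-headless-toggled out of rotation by setting the service.kubernetes.io/headless label on it through the API. A rough kubectl equivalent is sketched below; the empty label value is an assumption, since the test only relies on the key being present:

# Hedged sketch; the e2e framework patches the Service via the Go client, not kubectl.
kubectl --namespace=services-916 label service service-headless-toggled service.kubernetes.io/headless=""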
STEP: verifying service is not up
Apr 29 23:20:43.926: INFO: Creating new host exec pod
Apr 29 23:20:43.938: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:45.944: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:47.941: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:49.943: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:51.942: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:53.944: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:55.940: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:57.942: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:20:59.945: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:20:59.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.2.20:80 && echo service-down-failed'
Apr 29 23:21:02.950: INFO: rc: 28
Apr 29 23:21:02.950: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.2.20:80 && echo service-down-failed" in pod services-916/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.2.20:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.2.20:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-916
STEP: removing service.kubernetes.io/headless label
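Removing the label (the same key added above) puts the service back into rotation; with kubectl the trailing "-" form deletes a label key:

# Hedged kubectl equivalent of the step above.
kubectl --namespace=services-916 label service service-headless-toggled service.kubernetes.io/headless-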
STEP: verifying service is up
Apr 29 23:21:02.963: INFO: Creating new host exec pod
Apr 29 23:21:02.975: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:04.979: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:06.979: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:21:06.979: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:21:16.996: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done" in pod services-916/verify-service-up-host-exec-pod
Apr 29 23:21:16.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done'
Apr 29 23:21:17.450: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n"
Apr 29 23:21:17.451: INFO: stdout: "service-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\n..." [150 request responses, each answered by one of the three backends service-headless-toggled-z997q, service-headless-toggled-8qw2t and service-headless-toggled-vsbpf]
Apr 29 23:21:17.451: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done" in pod services-916/verify-service-up-exec-pod-jmc8h
Apr 29 23:21:17.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-up-exec-pod-jmc8h -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.2.20:80 2>&1 || true; echo; done'
Apr 29 23:21:17.851: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.2.20:80\n+ echo\n  [... the '+ wget -q -T 1 -O - http://10.233.2.20:80' and '+ echo' trace pair repeats for all 150 iterations ...]\n"
Apr 29 23:21:17.851: INFO: stdout: "service-headless-toggled-z997q\nservice-headless-toggled-8qw2t\nservice-headless-toggled-vsbpf\n..." [150 request responses, each answered by one of the three backends service-headless-toggled-z997q, service-headless-toggled-8qw2t and service-headless-toggled-vsbpf]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-916
STEP: Deleting pod verify-service-up-exec-pod-jmc8h in namespace services-916
STEP: verifying service-headless is still not up
Apr 29 23:21:17.867: INFO: Creating new host exec pod
Apr 29 23:21:17.883: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:19.888: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:21.886: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:21:21.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed'
Apr 29 23:21:24.298: INFO: rc: 28
Apr 29 23:21:24.298: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed" in pod services-916/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-916 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.45.68:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.45.68:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-916
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:24.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-916" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:78.179 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":269,"failed":0}
Apr 29 23:21:24.317: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:21:02.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-8720
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:21:02.234: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:02.266: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:04.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:06.269: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:08.271: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:10.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:12.269: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:14.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:16.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:18.271: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:20.269: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:22.301: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:24.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:26.269: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:28.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:30.268: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:32.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:34.272: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:21:34.277: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:40.298: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:40.298: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:40.306: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:40.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8720" for this suite.


S [SKIPPING] [38.191 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Apr 29 23:21:40.319: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:43.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-3308
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:19:43.477: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:19:43.509: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:45.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:47.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:19:49.512: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:51.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:53.514: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:55.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:57.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:19:59.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:01.512: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:20:03.512: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:20:03.517: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 23:20:05.521: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:20:11.563: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:20:11.563: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Apr 29 23:20:11.583: INFO: Service node-port-service in namespace nettest-3308 found.
Apr 29 23:20:11.596: INFO: Service session-affinity-service in namespace nettest-3308 found.
STEP: Waiting for NodePort service to expose endpoint
Apr 29 23:20:12.598: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Apr 29 23:20:13.605: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.233.7.55:90 (config.clusterIP)
Apr 29 23:20:13.610: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:13.610: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:13.696: INFO: Waiting for responses: map[netserver-0:{}]
Apr 29 23:20:15.701: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:15.701: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:15.831: INFO: Waiting for responses: map[netserver-0:{}]
Apr 29 23:20:17.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:17.836: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:17.973: INFO: Waiting for responses: map[]
Apr 29 23:20:17.974: INFO: reached 10.233.7.55 after 2/34 tries
STEP: Deleting a pod which will be replaced with a new endpoint
Apr 29 23:20:17.979: INFO: Waiting for pod netserver-0 to disappear
Apr 29 23:20:17.982: INFO: Pod netserver-0 no longer exists
Apr 29 23:20:18.983: INFO: Waiting for amount of service:node-port-service endpoints to be 1
STEP: dialing(udp) test-container-pod --> 10.233.7.55:90 (config.clusterIP) (endpoint recovery)
Apr 29 23:20:23.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:23.989: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:24.252: INFO: Waiting for responses: map[]
Apr 29 23:20:26.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:26.256: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:26.823: INFO: Waiting for responses: map[]
Apr 29 23:20:28.829: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:28.829: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:28.984: INFO: Waiting for responses: map[]
Apr 29 23:20:30.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:30.988: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:31.095: INFO: Waiting for responses: map[]
Apr 29 23:20:33.097: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:33.098: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:33.253: INFO: Waiting for responses: map[]
Apr 29 23:20:35.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:35.256: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:35.390: INFO: Waiting for responses: map[]
Apr 29 23:20:37.396: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:37.396: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:37.495: INFO: Waiting for responses: map[]
Apr 29 23:20:39.500: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:39.500: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:39.722: INFO: Waiting for responses: map[]
Apr 29 23:20:41.725: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:41.725: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:41.889: INFO: Waiting for responses: map[]
Apr 29 23:20:43.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:43.893: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:44.128: INFO: Waiting for responses: map[]
Apr 29 23:20:46.132: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:46.132: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:46.222: INFO: Waiting for responses: map[]
Apr 29 23:20:48.225: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:48.225: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:48.320: INFO: Waiting for responses: map[]
Apr 29 23:20:50.323: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:50.323: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:50.496: INFO: Waiting for responses: map[]
Apr 29 23:20:52.499: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:52.499: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:52.597: INFO: Waiting for responses: map[]
Apr 29 23:20:54.602: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:54.602: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:54.794: INFO: Waiting for responses: map[]
Apr 29 23:20:56.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:56.798: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:56.906: INFO: Waiting for responses: map[]
Apr 29 23:20:58.909: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:20:58.909: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:20:59.016: INFO: Waiting for responses: map[]
Apr 29 23:21:01.020: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:01.020: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:01.107: INFO: Waiting for responses: map[]
Apr 29 23:21:03.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:03.111: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:03.299: INFO: Waiting for responses: map[]
Apr 29 23:21:05.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:05.302: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:09.769: INFO: Waiting for responses: map[]
Apr 29 23:21:11.773: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:11.773: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:11.930: INFO: Waiting for responses: map[]
Apr 29 23:21:13.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:13.936: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:14.064: INFO: Waiting for responses: map[]
Apr 29 23:21:16.068: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:16.068: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:16.654: INFO: Waiting for responses: map[]
Apr 29 23:21:18.658: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:18.658: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:18.756: INFO: Waiting for responses: map[]
Apr 29 23:21:20.761: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:20.761: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:20.848: INFO: Waiting for responses: map[]
Apr 29 23:21:22.852: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:22.852: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:22.995: INFO: Waiting for responses: map[]
Apr 29 23:21:24.998: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:24.998: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:25.350: INFO: Waiting for responses: map[]
Apr 29 23:21:27.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:27.353: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:27.442: INFO: Waiting for responses: map[]
Apr 29 23:21:29.445: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:29.445: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:29.543: INFO: Waiting for responses: map[]
Apr 29 23:21:31.547: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:31.547: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:31.633: INFO: Waiting for responses: map[]
Apr 29 23:21:33.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:33.635: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:33.771: INFO: Waiting for responses: map[]
Apr 29 23:21:35.775: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:35.775: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:36.240: INFO: Waiting for responses: map[]
Apr 29 23:21:38.245: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:38.245: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:38.482: INFO: Waiting for responses: map[]
Apr 29 23:21:40.486: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.96:9080/dial?request=hostname&protocol=udp&host=10.233.7.55&port=90&tries=1'] Namespace:nettest-3308 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:21:40.486: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:21:40.582: INFO: Waiting for responses: map[]
Apr 29 23:21:40.582: INFO: reached 10.233.7.55 after 33/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:40.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3308" for this suite.


• [SLOW TEST:117.243 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:21:14.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-6821
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 23:21:15.091: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:15.122: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:17.125: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:19.127: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:21.125: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:23.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:25.125: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:27.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:29.128: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:31.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:33.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:35.127: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 23:21:37.126: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 23:21:37.132: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 23:21:41.170: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 29 23:21:41.170: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 23:21:41.177: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:21:41.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6821" for this suite.


S [SKIPPING] [26.211 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Apr 29 23:21:41.189: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:21:03.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-4689
STEP: creating service service-proxy-disabled in namespace services-4689
STEP: creating replication controller service-proxy-disabled in namespace services-4689
I0429 23:21:03.121157      26 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-4689, replica count: 3
I0429 23:21:06.172504      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:09.174671      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:12.175241      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:15.176466      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:18.177175      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:21.178372      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-4689
STEP: creating service service-proxy-toggled in namespace services-4689
STEP: creating replication controller service-proxy-toggled in namespace services-4689
I0429 23:21:21.193514      26 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-4689, replica count: 3
I0429 23:21:24.245075      26 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:27.246360      26 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:21:30.246631      26 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Apr 29 23:21:30.248: INFO: Creating new host exec pod
Apr 29 23:21:30.261: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:32.264: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:21:32.264: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:21:40.279: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done" in pod services-4689/verify-service-up-host-exec-pod
Apr 29 23:21:40.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done'
Apr 29 23:21:40.677: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.39.247:80\n+ echo\n  [... the '+ wget -q -T 1 -O - http://10.233.39.247:80' and '+ echo' trace pair repeats for all 150 iterations ...]\n"
Apr 29 23:21:40.677: INFO: stdout: "service-proxy-toggled-fvjmv\nservice-proxy-toggled-sv7rc\nservice-proxy-toggled-vnmpg\n  [... 150 responses in total, all from these three backend pods ...]\n"
Apr 29 23:21:40.678: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done" in pod services-4689/verify-service-up-exec-pod-hhwsg
Apr 29 23:21:40.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-up-exec-pod-hhwsg -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done'
Apr 29 23:21:41.050: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.39.247:80\n+ echo\n  [... the same wget/echo pair traced for each of the 150 iterations ...]\n"
Apr 29 23:21:41.051: INFO: stdout: "service-proxy-toggled-fvjmv\nservice-proxy-toggled-vnmpg\nservice-proxy-toggled-sv7rc\n  [... 150 responses in total, all from these three backend pods ...]\n"
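The two 150-line stdout dumps above are how the test decides the toggled service is serving from all of its backends: every wget response is the hostname of whichever backend pod answered. As a minimal sketch (not part of the test run), assuming one of those dumps had been saved to a file named wget-output.txt, the per-backend distribution could be checked with:

  # hypothetical helper: count how often each backend pod answered
  sort wget-output.txt | uniq -c | sort -rn
  # all three service-proxy-toggled-* pods should show a non-zero count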
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-4689
STEP: Deleting pod verify-service-up-exec-pod-hhwsg in namespace services-4689
STEP: verifying service-disabled is not up
Apr 29 23:21:41.068: INFO: Creating new host exec pod
Apr 29 23:21:41.082: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:43.087: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:45.085: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:21:45.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.61.234:80 && echo service-down-failed'
Apr 29 23:21:47.517: INFO: rc: 28
Apr 29 23:21:47.517: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.61.234:80 && echo service-down-failed" in pod services-4689/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.61.234:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.61.234:80
command terminated with exit code 28

error:
exit status 28
Output: 
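curl exits with code 28 when its --connect-timeout expires, so the 2-second probe timing out (and the trailing 'echo service-down-failed' therefore never running) is exactly the outcome this step wants. A minimal sketch of the same probe, assuming a shell with access to the cluster network:

  # hypothetical re-run of the 'service-disabled is not up' probe
  if curl -g -s --connect-timeout 2 http://10.233.61.234:80; then
    echo service-down-failed          # any reply would fail the step
  else
    echo "no reply (curl rc $?); the disabled service is unreachable, as expected"
  fi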
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4689
STEP: adding service-proxy-name label
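The label in question is service.kubernetes.io/service-proxy-name; once it is set on a Service, kube-proxy stops programming rules for it. The test applies it through the API client, but a hedged hand-rolled equivalent (the Service name and label value here are assumptions inferred from the backend pod names, not read from the run) would be:

  # hypothetical equivalent of the step above
  kubectl --kubeconfig=/root/.kube/config -n services-4689 \
    label service service-proxy-toggled service.kubernetes.io/service-proxy-name=foo-bar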
STEP: verifying service is not up
Apr 29 23:21:47.531: INFO: Creating new host exec pod
Apr 29 23:21:47.544: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:49.548: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:21:49.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.39.247:80 && echo service-down-failed'
Apr 29 23:21:51.796: INFO: rc: 28
Apr 29 23:21:51.796: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.39.247:80 && echo service-down-failed" in pod services-4689/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.39.247:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.39.247:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4689
STEP: removing service-proxy-name annotation
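Despite the step text saying annotation, what is removed here is the service.kubernetes.io/service-proxy-name label added at 23:21:47; a hedged hand-rolled equivalent (same assumed Service name as above) would be:

  # hypothetical equivalent of the step above; the trailing '-' removes the label
  kubectl --kubeconfig=/root/.kube/config -n services-4689 \
    label service service-proxy-toggled service.kubernetes.io/service-proxy-name-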
STEP: verifying service is up
Apr 29 23:21:51.812: INFO: Creating new host exec pod
Apr 29 23:21:51.825: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:21:53.831: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 29 23:21:53.831: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 29 23:21:57.849: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done" in pod services-4689/verify-service-up-host-exec-pod
Apr 29 23:21:57.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done'
Apr 29 23:21:58.209: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.39.247:80\n+ echo\n  [... the same wget/echo pair traced for each of the 150 iterations ...]\n"
Apr 29 23:21:58.210: INFO: stdout: "service-proxy-toggled-fvjmv\nservice-proxy-toggled-sv7rc\nservice-proxy-toggled-vnmpg\n  [... 150 responses in total, all from these three backend pods ...]\n"
Apr 29 23:21:58.210: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done" in pod services-4689/verify-service-up-exec-pod-zgkz2
Apr 29 23:21:58.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-up-exec-pod-zgkz2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.39.247:80 2>&1 || true; echo; done'
Apr 29 23:21:58.623: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.39.247:80\n+ echo\n  [... the same wget/echo pair traced for each of the 150 iterations ...]\n"
Apr 29 23:21:58.624: INFO: stdout: "service-proxy-toggled-vnmpg\nservice-proxy-toggled-sv7rc\nservice-proxy-toggled-fvjmv\n  [... 150 responses in total, all from these three backend pods ...]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-4689
STEP: Deleting pod verify-service-up-exec-pod-zgkz2 in namespace services-4689
STEP: verifying service-disabled is still not up
Apr 29 23:21:58.640: INFO: Creating new host exec pod
Apr 29 23:21:58.652: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:22:00.656: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 29 23:22:00.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.61.234:80 && echo service-down-failed'
Apr 29 23:22:02.923: INFO: rc: 28
Apr 29 23:22:02.923: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.61.234:80 && echo service-down-failed" in pod services-4689/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4689 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.61.234:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.61.234:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4689
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:22:02.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4689" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:59.847 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":2,"skipped":188,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Apr 29 23:22:02.944: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:19:51.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-7318
Apr 29 23:19:51.196: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
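The test flips the Service type through the API; a hedged hand-rolled equivalent would be a spec patch like the one below.

  # hypothetical equivalent of the type change
  kubectl --kubeconfig=/root/.kube/config -n services-7318 \
    patch service nodeport-update-service -p '{"spec":{"type":"NodePort"}}'

After kube-proxy reconciles, the allocated node port (30885 in the probes further down) should accept TCP on every node, including 10.10.190.207.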
STEP: creating replication controller nodeport-update-service in namespace services-7318
I0429 23:19:51.208100      32 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-7318, replica count: 2
I0429 23:19:54.259463      32 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:19:57.259615      32 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:00.260704      32 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 23:20:03.261826      32 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 23:20:03.261: INFO: Creating new exec pod
Apr 29 23:20:10.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Apr 29 23:20:10.780: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Apr 29 23:20:10.780: INFO: stdout: "nodeport-update-service-f8mb5"
Apr 29 23:20:10.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.26.221 80'
Apr 29 23:20:11.420: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.26.221 80\nConnection to 10.233.26.221 80 port [tcp/http] succeeded!\n"
Apr 29 23:20:11.420: INFO: stdout: ""
Apr 29 23:20:12.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.26.221 80'
Apr 29 23:20:12.805: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.26.221 80\nConnection to 10.233.26.221 80 port [tcp/http] succeeded!\n"
Apr 29 23:20:12.805: INFO: stdout: ""
Apr 29 23:20:13.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.26.221 80'
Apr 29 23:20:13.695: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.26.221 80\nConnection to 10.233.26.221 80 port [tcp/http] succeeded!\n"
Apr 29 23:20:13.695: INFO: stdout: "nodeport-update-service-f8mb5"
Apr 29 23:20:13.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:13.965: INFO: rc: 1
Apr 29 23:20:13.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
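The node port was allocated only moments earlier, so these first probes hit a port kube-proxy has not programmed yet; the framework keeps retrying roughly once per second, as the timestamps below show. A minimal sketch of the same wait, assuming it were run from inside the exec pod:

  # hypothetical retry loop mirroring the probes that follow
  until echo hostName | nc -v -t -w 2 10.10.190.207 30885; do
    echo 'Retrying...'
    sleep 1
  done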
Apr 29 23:20:14.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:15.422: INFO: rc: 1
Apr 29 23:20:15.422: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:15.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:16.215: INFO: rc: 1
Apr 29 23:20:16.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30885
+ echo hostName
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:16.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:17.205: INFO: rc: 1
Apr 29 23:20:17.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:17.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:18.222: INFO: rc: 1
Apr 29 23:20:18.222: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:18.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:19.207: INFO: rc: 1
Apr 29 23:20:19.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:19.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:20.273: INFO: rc: 1
Apr 29 23:20:20.273: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:20.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:21.211: INFO: rc: 1
Apr 29 23:20:21.211: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30885
+ echo hostName
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:21.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:22.235: INFO: rc: 1
Apr 29 23:20:22.235: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:22.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:23.274: INFO: rc: 1
Apr 29 23:20:23.274: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:23.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:24.346: INFO: rc: 1
Apr 29 23:20:24.346: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:24.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:26.442: INFO: rc: 1
Apr 29 23:20:26.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:26.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:27.214: INFO: rc: 1
Apr 29 23:20:27.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:27.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:28.887: INFO: rc: 1
Apr 29 23:20:28.887: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:28.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:29.262: INFO: rc: 1
Apr 29 23:20:29.263: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:29.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:30.209: INFO: rc: 1
Apr 29 23:20:30.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:30.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:31.327: INFO: rc: 1
Apr 29 23:20:31.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:31.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:32.234: INFO: rc: 1
Apr 29 23:20:32.234: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:32.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:33.291: INFO: rc: 1
Apr 29 23:20:33.291: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:33.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:34.619: INFO: rc: 1
Apr 29 23:20:34.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:34.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:35.376: INFO: rc: 1
Apr 29 23:20:35.376: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:35.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:36.217: INFO: rc: 1
Apr 29 23:20:36.217: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:36.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:37.409: INFO: rc: 1
Apr 29 23:20:37.409: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:37.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:38.294: INFO: rc: 1
Apr 29 23:20:38.294: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:38.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:39.239: INFO: rc: 1
Apr 29 23:20:39.239: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:39.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:40.347: INFO: rc: 1
Apr 29 23:20:40.347: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:40.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:41.222: INFO: rc: 1
Apr 29 23:20:41.222: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:41.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:42.213: INFO: rc: 1
Apr 29 23:20:42.213: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:42.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:43.661: INFO: rc: 1
Apr 29 23:20:43.661: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:43.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:44.320: INFO: rc: 1
Apr 29 23:20:44.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:44.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:45.242: INFO: rc: 1
Apr 29 23:20:45.242: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:45.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:46.231: INFO: rc: 1
Apr 29 23:20:46.231: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:46.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:47.227: INFO: rc: 1
Apr 29 23:20:47.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:47.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:48.427: INFO: rc: 1
Apr 29 23:20:48.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:48.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:49.898: INFO: rc: 1
Apr 29 23:20:49.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:49.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:50.221: INFO: rc: 1
Apr 29 23:20:50.222: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:50.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:51.437: INFO: rc: 1
Apr 29 23:20:51.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:51.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:52.454: INFO: rc: 1
Apr 29 23:20:52.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:52.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:53.262: INFO: rc: 1
Apr 29 23:20:53.262: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:53.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:54.305: INFO: rc: 1
Apr 29 23:20:54.305: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:54.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:55.228: INFO: rc: 1
Apr 29 23:20:55.228: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:55.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:56.455: INFO: rc: 1
Apr 29 23:20:56.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:56.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:57.229: INFO: rc: 1
Apr 29 23:20:57.230: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:57.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:58.455: INFO: rc: 1
Apr 29 23:20:58.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:58.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:20:59.233: INFO: rc: 1
Apr 29 23:20:59.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:20:59.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:00.214: INFO: rc: 1
Apr 29 23:21:00.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30885
+ echo hostName
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:00.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:01.212: INFO: rc: 1
Apr 29 23:21:01.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:01.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:02.209: INFO: rc: 1
Apr 29 23:21:02.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:02.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:03.252: INFO: rc: 1
Apr 29 23:21:03.252: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:03.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:04.306: INFO: rc: 1
Apr 29 23:21:04.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:04.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:05.437: INFO: rc: 1
Apr 29 23:21:05.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:05.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:06.210: INFO: rc: 1
Apr 29 23:21:06.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:06.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:07.225: INFO: rc: 1
Apr 29 23:21:07.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:07.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:08.438: INFO: rc: 1
Apr 29 23:21:08.438: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:08.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:09.250: INFO: rc: 1
Apr 29 23:21:09.250: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:09.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:10.231: INFO: rc: 1
Apr 29 23:21:10.231: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:10.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:11.286: INFO: rc: 1
Apr 29 23:21:11.286: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:11.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:12.298: INFO: rc: 1
Apr 29 23:21:12.298: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30885
+ echo hostName
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:12.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:13.369: INFO: rc: 1
Apr 29 23:21:13.369: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:13.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:14.223: INFO: rc: 1
Apr 29 23:21:14.224: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:14.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:15.221: INFO: rc: 1
Apr 29 23:21:15.221: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:15.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:16.447: INFO: rc: 1
Apr 29 23:21:16.447: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:16.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:17.223: INFO: rc: 1
Apr 29 23:21:17.223: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:17.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:18.198: INFO: rc: 1
Apr 29 23:21:18.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:18.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:19.242: INFO: rc: 1
Apr 29 23:21:19.242: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:19.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:20.339: INFO: rc: 1
Apr 29 23:21:20.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:20.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:21.231: INFO: rc: 1
Apr 29 23:21:21.231: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:21.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:22.475: INFO: rc: 1
Apr 29 23:21:22.475: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:22.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:23.411: INFO: rc: 1
Apr 29 23:21:23.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:23.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:24.646: INFO: rc: 1
Apr 29 23:21:24.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:24.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:26.388: INFO: rc: 1
Apr 29 23:21:26.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:26.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:27.205: INFO: rc: 1
Apr 29 23:21:27.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:27.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:28.212: INFO: rc: 1
Apr 29 23:21:28.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:28.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:29.196: INFO: rc: 1
Apr 29 23:21:29.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:29.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:30.210: INFO: rc: 1
Apr 29 23:21:30.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:30.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:31.255: INFO: rc: 1
Apr 29 23:21:31.255: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:31.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:32.211: INFO: rc: 1
Apr 29 23:21:32.211: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:32.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:33.854: INFO: rc: 1
Apr 29 23:21:33.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:33.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:34.272: INFO: rc: 1
Apr 29 23:21:34.272: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:34.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:36.161: INFO: rc: 1
Apr 29 23:21:36.161: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:36.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:37.738: INFO: rc: 1
Apr 29 23:21:37.738: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:21:37.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:21:38.261: INFO: rc: 1
Apr 29 23:21:38.261: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ nc -v -t -w 2+  10.10.190.207 30885
echo hostName
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... 36 further probe attempts omitted: the same '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885' command was retried roughly once per second from 23:21:38.967 to 23:22:14.187, each attempt returning rc: 1 with "nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused" ...]
Apr 29 23:22:14.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885'
Apr 29 23:22:14.424: INFO: rc: 1
Apr 29 23:22:14.424: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7318 exec execpod7jqfj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30885:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30885
nc: connect to 10.10.190.207 port 30885 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 29 23:22:14.424: FAIL: Unexpected error:
    <*errors.errorString | 0xc003e1ea30>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30885 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30885 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0011a0d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0011a0d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0011a0d80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
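Note: the failure above comes from a poll-until-timeout loop around the kubectl exec / nc probe shown in the retries: the test keeps piping "echo hostName" into nc against the node IP and NodePort until either a connection succeeds or the 2m0s budget is exhausted. The following Go sketch is a hypothetical, stand-alone reproduction of that probe, not the e2e framework's actual implementation; the kubeconfig, namespace, pod name, node IP, node port and the 2-minute/1-second constants are simply copied from this log.

// probe.go: hypothetical reproduction of the NodePort reachability check seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubeconfig = "/root/.kube/config"
		namespace  = "services-7318"
		pod        = "execpod7jqfj"
		nodeIP     = "10.10.190.207"
		nodePort   = "30885"
		timeout    = 2 * time.Minute // overall reachability budget
		interval   = time.Second     // delay between attempts
	)
	// Same shell pipeline the test runs inside the exec pod.
	probe := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %s", nodeIP, nodePort)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--kubeconfig="+kubeconfig, "--namespace="+namespace,
			"exec", pod, "--", "/bin/sh", "-x", "-c", probe).CombinedOutput()
		if err == nil {
			fmt.Printf("service reachable: %s", out)
			return
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(interval)
	}
	fmt.Printf("service is not reachable within %s on %s:%s\n", timeout, nodeIP, nodePort)
}

Against a healthy NodePort service the loop returns on the first successful nc connection; against the state captured above it exits after the two-minute deadline with the same "service is not reachable" message that the test reports.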
Apr 29 23:22:14.425: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-7318".
STEP: Found 17 events.
Apr 29 23:22:14.453: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod7jqfj: { } Scheduled: Successfully assigned services-7318/execpod7jqfj to node2
Apr 29 23:22:14.453: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-f8mb5: { } Scheduled: Successfully assigned services-7318/nodeport-update-service-f8mb5 to node2
Apr 29 23:22:14.453: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-xrvlw: { } Scheduled: Successfully assigned services-7318/nodeport-update-service-xrvlw to node2
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:51 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-xrvlw
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:51 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-f8mb5
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:53 +0000 UTC - event for nodeport-update-service-f8mb5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:54 +0000 UTC - event for nodeport-update-service-f8mb5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 406.43786ms
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:54 +0000 UTC - event for nodeport-update-service-f8mb5: {kubelet node2} Started: Started container nodeport-update-service
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:54 +0000 UTC - event for nodeport-update-service-f8mb5: {kubelet node2} Created: Created container nodeport-update-service
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:54 +0000 UTC - event for nodeport-update-service-xrvlw: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:56 +0000 UTC - event for nodeport-update-service-xrvlw: {kubelet node2} Started: Started container nodeport-update-service
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:56 +0000 UTC - event for nodeport-update-service-xrvlw: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.68608187s
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:19:56 +0000 UTC - event for nodeport-update-service-xrvlw: {kubelet node2} Created: Created container nodeport-update-service
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:20:05 +0000 UTC - event for execpod7jqfj: {kubelet node2} Created: Created container agnhost-container
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:20:05 +0000 UTC - event for execpod7jqfj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:20:05 +0000 UTC - event for execpod7jqfj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 331.878004ms
Apr 29 23:22:14.453: INFO: At 2022-04-29 23:20:06 +0000 UTC - event for execpod7jqfj: {kubelet node2} Started: Started container agnhost-container
Apr 29 23:22:14.456: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Apr 29 23:22:14.456: INFO: execpod7jqfj                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:20:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:20:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:20:03 +0000 UTC  }]
Apr 29 23:22:14.456: INFO: nodeport-update-service-f8mb5  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:51 +0000 UTC  }]
Apr 29 23:22:14.456: INFO: nodeport-update-service-xrvlw  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 23:19:51 +0000 UTC  }]
Apr 29 23:22:14.456: INFO: 
Apr 29 23:22:14.460: INFO: 
Logging node info for node master1
Apr 29 23:22:14.463: INFO: Node Info: &Node{ObjectMeta:{master1    c968c2e7-7594-4f6e-b85d-932008e8124f 76760 0 2022-04-29 19:57:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-29 19:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:05:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-04-29 20:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:08 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:08 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:08 +0000 UTC,LastTransitionTime:2022-04-29 19:57:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:22:08 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c3419fad4d2d4c5c9574e5b11ef92b4b,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:5e0f934f-c777-4827-ade6-efec15a825ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:22:14.463: INFO: 
Logging kubelet events for node master1
Apr 29 23:22:14.465: INFO: 
Logging pods the kubelet thinks are on node master1
Apr 29 23:22:14.490: INFO: coredns-8474476ff8-59qm6 started at 2022-04-29 20:00:39 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container coredns ready: true, restart count 1
Apr 29 23:22:14.490: INFO: container-registry-65d7c44b96-np5nk started at 2022-04-29 20:04:54 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container docker-registry ready: true, restart count 0
Apr 29 23:22:14.490: INFO: 	Container nginx ready: true, restart count 0
Apr 29 23:22:14.490: INFO: kube-proxy-9s46x started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container kube-proxy ready: true, restart count 1
Apr 29 23:22:14.490: INFO: kube-flannel-cskzh started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Init container install-cni ready: true, restart count 0
Apr 29 23:22:14.490: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 29 23:22:14.490: INFO: kube-multus-ds-amd64-w54d6 started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:22:14.490: INFO: node-feature-discovery-controller-cff799f9f-zpv5m started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container nfd-controller ready: true, restart count 0
Apr 29 23:22:14.490: INFO: node-exporter-svkqv started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:22:14.490: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:22:14.490: INFO: kube-apiserver-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 29 23:22:14.490: INFO: kube-controller-manager-master1 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 29 23:22:14.490: INFO: kube-scheduler-master1 started at 2022-04-29 20:16:35 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.490: INFO: 	Container kube-scheduler ready: true, restart count 1
Apr 29 23:22:14.587: INFO: 
Latency metrics for node master1
Apr 29 23:22:14.587: INFO: 
Logging node info for node master2
Apr 29 23:22:14.590: INFO: Node Info: &Node{ObjectMeta:{master2    5b362581-f2d5-419c-a0b0-3aad7bec82f9 76768 0 2022-04-29 19:57:49 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-29 19:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:15 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 20:03:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d055250c7e194b8a9a572c232266a800,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fb9f32a4-f021-45dd-bddf-6f1d5ae9abae,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:22:14.590: INFO: 
Logging kubelet events for node master2
Apr 29 23:22:14.593: INFO: 
Logging pods the kubelet thinks are on node master2
Apr 29 23:22:14.606: INFO: kube-apiserver-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 29 23:22:14.606: INFO: kube-proxy-4dnjw started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:22:14.606: INFO: kube-flannel-q2wgv started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Init container install-cni ready: true, restart count 0
Apr 29 23:22:14.606: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 29 23:22:14.606: INFO: kube-multus-ds-amd64-txslv started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:22:14.606: INFO: kube-controller-manager-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-controller-manager ready: true, restart count 1
Apr 29 23:22:14.606: INFO: kube-scheduler-master2 started at 2022-04-29 20:02:53 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-scheduler ready: true, restart count 3
Apr 29 23:22:14.606: INFO: dns-autoscaler-7df78bfcfb-csfp5 started at 2022-04-29 20:00:43 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container autoscaler ready: true, restart count 1
Apr 29 23:22:14.606: INFO: coredns-8474476ff8-bg2wr started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container coredns ready: true, restart count 2
Apr 29 23:22:14.606: INFO: prometheus-operator-585ccfb458-q8r6q started at 2022-04-29 20:13:20 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:22:14.606: INFO: 	Container prometheus-operator ready: true, restart count 0
Apr 29 23:22:14.606: INFO: node-exporter-9rgc2 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.606: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:22:14.606: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:22:14.690: INFO: 
Latency metrics for node master2
Apr 29 23:22:14.690: INFO: 
Logging node info for node master3
Apr 29 23:22:14.692: INFO: Node Info: &Node{ObjectMeta:{master3    1096e515-b559-4c90-b0f7-3398537b5f9e 76766 0 2022-04-29 19:58:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-29 19:58:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-29 20:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:16 +0000 UTC,LastTransitionTime:2022-04-29 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:58:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 20:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8955b376e6314525a9e533e277f5f4fb,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:6ffefaf4-8a5c-4288-a6a9-78ef35aa67ef,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:22:14.693: INFO: 
Logging kubelet events for node master3
Apr 29 23:22:14.696: INFO: 
Logging pods the kubelet thinks are on node master3
Apr 29 23:22:14.708: INFO: kube-proxy-gs7qh started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:22:14.708: INFO: kube-flannel-g8w9b started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Init container install-cni ready: true, restart count 0
Apr 29 23:22:14.708: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 29 23:22:14.708: INFO: kube-multus-ds-amd64-lxrlj started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:22:14.708: INFO: node-exporter-gdq6v started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:22:14.708: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:22:14.708: INFO: kube-apiserver-master3 started at 2022-04-29 19:58:29 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 29 23:22:14.708: INFO: kube-controller-manager-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Container kube-controller-manager ready: true, restart count 3
Apr 29 23:22:14.708: INFO: kube-scheduler-master3 started at 2022-04-29 20:06:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.708: INFO: 	Container kube-scheduler ready: true, restart count 2
Apr 29 23:22:14.782: INFO: 
Latency metrics for node master3
Apr 29 23:22:14.782: INFO: 
Logging node info for node node1
Apr 29 23:22:14.786: INFO: Node Info: &Node{ObjectMeta:{node1    6842a10e-614a-46f0-b405-bc18936b0017 76763 0 2022-04-29 19:59:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 22:27:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:02:57 +0000 UTC,LastTransitionTime:2022-04-29 20:02:57 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:10 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:10 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:10 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:22:10 +0000 UTC,LastTransitionTime:2022-04-29 20:00:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2a0958eb1b3044f2963c9e5f2e902173,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fc6a2d14-7726-4aec-9428-6617632ddcbe,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975dd7715ddae0b47f70ed62d6f52e6be7e3f22 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:22:14.786: INFO: 
Logging kubelet events for node node1
Apr 29 23:22:14.788: INFO: 
Logging pods the kubelet thinks are on node node1
Apr 29 23:22:14.807: INFO: nginx-proxy-node1 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 29 23:22:14.807: INFO: cmk-f5znp started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container nodereport ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container reconcile ready: true, restart count 0
Apr 29 23:22:14.807: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 29 23:22:14.807: INFO: cmk-init-discover-node1-gxlbt started at 2022-04-29 20:11:43 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container discover ready: false, restart count 0
Apr 29 23:22:14.807: INFO: 	Container init ready: false, restart count 0
Apr 29 23:22:14.807: INFO: 	Container install ready: false, restart count 0
Apr 29 23:22:14.807: INFO: prometheus-k8s-0 started at 2022-04-29 20:13:38 +0000 UTC (0+4 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container config-reloader ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container grafana ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container prometheus ready: true, restart count 1
Apr 29 23:22:14.807: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 started at 2022-04-29 20:16:34 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container tas-extender ready: true, restart count 0
Apr 29 23:22:14.807: INFO: kube-proxy-v9tgj started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:22:14.807: INFO: node-exporter-c8777 started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:22:14.807: INFO: kube-flannel-47phs started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Init container install-cni ready: true, restart count 2
Apr 29 23:22:14.807: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 29 23:22:14.807: INFO: collectd-ccgw2 started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container collectd ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 29 23:22:14.807: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 29 23:22:14.807: INFO: kube-multus-ds-amd64-kkz4q started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:22:14.807: INFO: node-feature-discovery-worker-kbl9s started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 29 23:22:14.807: INFO: kubernetes-dashboard-785dcbb76d-d2k5n started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Apr 29 23:22:14.807: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 started at 2022-04-29 20:00:45 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:14.807: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 23:22:15.009: INFO: 
Latency metrics for node node1
Apr 29 23:22:15.009: INFO: 
Logging node info for node node2
Apr 29 23:22:15.013: INFO: Node Info: &Node{ObjectMeta:{node2    2f399869-e81b-465d-97b4-806b6186d34a 76769 0 2022-04-29 19:59:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-29 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-29 20:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-29 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-29 20:12:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-29 22:27:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-04-29 22:53:59 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-29 20:03:12 +0000 UTC,LastTransitionTime:2022-04-29 20:03:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 19:59:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-29 23:22:11 +0000 UTC,LastTransitionTime:2022-04-29 20:03:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:22c763056cc24e6ba6e8bbadb5113d3d,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:8ca050bd-5d8a-4c59-8e02-41e26864aa92,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:cfef1b50441378a7b326a606756a12e664a435cc215d910f7aa9415cfde56361 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d8880c681aaa0e3df9b949467d93bcecf42e8625a181 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 29 23:22:15.013: INFO: 
Logging kubelet events for node node2
Apr 29 23:22:15.015: INFO: 
Logging pods the kubelet thinks are on node node2
Apr 29 23:22:15.026: INFO: node-feature-discovery-worker-jtjjb started at 2022-04-29 20:08:04 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 29 23:22:15.026: INFO: kube-multus-ds-amd64-7slcd started at 2022-04-29 20:00:12 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container kube-multus ready: true, restart count 1
Apr 29 23:22:15.026: INFO: cmk-init-discover-node2-csdn7 started at 2022-04-29 20:12:03 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container discover ready: false, restart count 0
Apr 29 23:22:15.026: INFO: 	Container init ready: false, restart count 0
Apr 29 23:22:15.026: INFO: 	Container install ready: false, restart count 0
Apr 29 23:22:15.026: INFO: collectd-zxs8j started at 2022-04-29 20:17:24 +0000 UTC (0+3 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container collectd ready: true, restart count 0
Apr 29 23:22:15.026: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 29 23:22:15.026: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 29 23:22:15.026: INFO: nginx-proxy-node2 started at 2022-04-29 19:59:05 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 29 23:22:15.026: INFO: nodeport-update-service-xrvlw started at 2022-04-29 23:19:51 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 29 23:22:15.026: INFO: kube-proxy-k6tv2 started at 2022-04-29 19:59:08 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 29 23:22:15.026: INFO: kube-flannel-dbcj8 started at 2022-04-29 20:00:03 +0000 UTC (1+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Init container install-cni ready: true, restart count 2
Apr 29 23:22:15.026: INFO: 	Container kube-flannel ready: true, restart count 3
Apr 29 23:22:15.026: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 started at 2022-04-29 20:09:17 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 29 23:22:15.026: INFO: cmk-74bh9 started at 2022-04-29 20:12:25 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container nodereport ready: true, restart count 0
Apr 29 23:22:15.026: INFO: 	Container reconcile ready: true, restart count 0
Apr 29 23:22:15.026: INFO: node-exporter-tlpmt started at 2022-04-29 20:13:28 +0000 UTC (0+2 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:22:15.026: INFO: 	Container node-exporter ready: true, restart count 0
Apr 29 23:22:15.026: INFO: nodeport-update-service-f8mb5 started at 2022-04-29 23:19:51 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 29 23:22:15.026: INFO: execpod7jqfj started at 2022-04-29 23:20:03 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 29 23:22:15.026: INFO: cmk-webhook-6c9d5f8578-b9mdv started at 2022-04-29 20:12:26 +0000 UTC (0+1 container statuses recorded)
Apr 29 23:22:15.026: INFO: 	Container cmk-webhook ready: true, restart count 0
Apr 29 23:22:15.171: INFO: 
Latency metrics for node node2
Apr 29 23:22:15.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7318" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [144.005 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Apr 29 23:22:14.424: Unexpected error:
      <*errors.errorString | 0xc003e1ea30>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30885 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30885 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
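  The error above means no TCP connection to NodePort 30885 on node1's InternalIP (10.10.190.207) succeeded before the 2m0s polling window expired. A minimal, self-contained sketch of such a reachability probe follows; it is not the e2e framework's own helper, and the endpoint and timeout values are simply copied from the failure message.

  package main

  import (
  	"fmt"
  	"net"
  	"time"
  )

  // probeTCP keeps dialing addr until a connection succeeds or the overall
  // timeout elapses, mirroring the "service is not reachable within 2m0s
  // timeout ... over TCP protocol" style of check reported above.
  func probeTCP(addr string, timeout time.Duration) error {
  	deadline := time.Now().Add(timeout)
  	for time.Now().Before(deadline) {
  		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
  		if err == nil {
  			conn.Close()
  			return nil // endpoint reachable
  		}
  		time.Sleep(2 * time.Second) // poll interval; value is illustrative
  	}
  	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
  }

  func main() {
  	// Endpoint taken from the failure message: node1 InternalIP and NodePort 30885.
  	if err := probeTCP("10.10.190.207:30885", 2*time.Minute); err != nil {
  		fmt.Println(err)
  	}
  }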
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":1,"skipped":1173,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Apr 29 23:22:15.185: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":-1,"completed":3,"skipped":85,"failed":0}
Apr 29 23:21:40.593: INFO: Running AfterSuite actions on all nodes
Apr 29 23:22:15.250: INFO: Running AfterSuite actions on node 1
Apr 29 23:22:15.250: INFO: Skipping dumping logs from cluster



Summarizing 2 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
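The Conntrack failure above concerns UDP traffic to a NodePort continuing to flow after the backing server pod is deleted and recreated; stale conntrack entries can keep blackholing datagrams addressed to the new pod. A minimal client-side sketch of a UDP echo probe that could be run before and after the pod cycles follows; the endpoint and payload are hypothetical, since the actual NodePort and server are set up by the test itself.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeUDP sends one datagram to addr and waits briefly for any reply.
// Repeating this around a server pod restart would show whether traffic
// to the NodePort resumes once the replacement pod is ready.
func probeUDP(addr string) error {
	conn, err := net.DialTimeout("udp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostname\n")); err != nil {
		return err
	}
	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return fmt.Errorf("no UDP reply from %s: %v", addr, err)
	}
	fmt.Printf("reply from %s: %q\n", addr, buf[:n])
	return nil
}

func main() {
	// Illustrative endpoint only; the real NodePort is assigned at runtime.
	_ = probeUDP("10.10.190.207:30999")
}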

Ran 27 of 5773 Specs in 183.544 seconds
FAIL! -- 25 Passed | 2 Failed | 0 Pending | 5746 Skipped


Ginkgo ran 1 suite in 3m5.240674212s
Test Suite Failed