Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654898260 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun 10 21:57:41.743: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.748: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 10 21:57:41.775: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 10 21:57:41.843: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 10 21:57:41.843: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 10 21:57:41.843: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 10 21:57:41.843: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 10 21:57:41.843: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 10 21:57:41.858: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 10 21:57:41.858: INFO: e2e test version: v1.21.9
Jun 10 21:57:41.859: INFO: kube-apiserver version: v1.21.1
Jun 10 21:57:41.860: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.866: INFO: Cluster IP family: ipv4
Jun 10 21:57:41.861: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.882: INFO: Cluster IP family: ipv4
Jun 10 21:57:41.862: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.884: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 10 21:57:41.875: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.896: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 10 21:57:41.885: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.906: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 10 21:57:41.895: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.915: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 10 21:57:41.893: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.917: INFO: Cluster IP family: ipv4
Jun 10 21:57:41.896: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.918: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
Jun 10 21:57:41.902: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.927: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 10 21:57:41.908: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 21:57:41.929: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:41.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0610 21:57:41.960430 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:41.960: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:41.962: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:41.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8448" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•SSSS
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
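The Services spec above only lists services across all namespaces, so there is no fixture to rebuild. The equivalent manual check, a sketch assuming the same kubeconfig the suite uses, is a single cross-namespace list:

  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config get services --all-namespaces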
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:41.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 21:57:42.002: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jun 10 21:57:44.028: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:45.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-235" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:41.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
W0610 21:57:41.896262 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:41.896: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:41.900: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:47.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1208" for this suite.

• [SLOW TEST:6.075 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
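The Docker Containers spec above verifies that a container with blank command and args falls back to the image's ENTRYPOINT/CMD. A minimal pod exercising the same behavior might look like the sketch below; the pod and container names are illustrative, not taken from the log, and the agnhost image is simply the one the suite uses elsewhere:

  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo        # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      # no command/args: the kubelet runs the image's default ENTRYPOINT/CMD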
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:41.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0610 21:57:41.949278 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:41.949: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:41.951: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 21:57:41.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 create -f -'
Jun 10 21:57:42.428: INFO: stderr: ""
Jun 10 21:57:42.428: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Jun 10 21:57:42.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 create -f -'
Jun 10 21:57:42.782: INFO: stderr: ""
Jun 10 21:57:42.782: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jun 10 21:57:43.787: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 10 21:57:43.787: INFO: Found 0 / 1
Jun 10 21:57:44.787: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 10 21:57:44.787: INFO: Found 0 / 1
Jun 10 21:57:45.786: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 10 21:57:45.786: INFO: Found 0 / 1
Jun 10 21:57:46.785: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 10 21:57:46.786: INFO: Found 0 / 1
Jun 10 21:57:47.786: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 10 21:57:47.786: INFO: Found 1 / 1
Jun 10 21:57:47.786: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 10 21:57:47.789: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 10 21:57:47.789: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 10 21:57:47.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe pod agnhost-primary-b9tfx'
Jun 10 21:57:47.986: INFO: stderr: ""
Jun 10 21:57:47.986: INFO: stdout: "Name: agnhost-primary-b9tfx\nNamespace: kubectl-1799\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 10 Jun 2022 21:57:42 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.111\"\n ],\n \"mac\": \"d6:d5:3b:04:48:06\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.111\"\n ],\n \"mac\": \"d6:d5:3b:04:48:06\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.111\nIPs:\n IP: 10.244.3.111\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://3a30279d45cf95a83b47b1e614cd10dfdde473b09b513ca15d745812f7b908ad\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 10 Jun 2022 21:57:46 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jkltt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-jkltt:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-1799/agnhost-primary-b9tfx to node1\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 302.930698ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n"
Jun 10 21:57:47.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe rc agnhost-primary'
Jun 10 21:57:48.179: INFO: stderr: ""
Jun 10 21:57:48.180: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1799\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-primary-b9tfx\n"
Jun 10 21:57:48.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe service agnhost-primary'
Jun 10 21:57:48.378: INFO: stderr: ""
Jun 10 21:57:48.378: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1799\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.29.225\nIPs: 10.233.29.225\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.111:6379\nSession Affinity: None\nEvents: <none>\n"
Jun 10 21:57:48.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe node master1'
Jun 10 21:57:48.599: INFO: stderr: ""
Jun 10 21:57:48.599: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n nfd.node.kubernetes.io/master.version: v0.8.2\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jun 2022 19:57:38 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: <unset>\n RenewTime: Fri, 10 Jun 2022 21:57:38 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 10 Jun 2022 20:03:20 +0000 Fri, 10 Jun 2022 20:03:20 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 10 Jun 2022 21:57:39 +0000 Fri, 10 Jun 2022 19:57:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 10 Jun 2022 21:57:39 +0000 Fri, 10 Jun 2022 19:57:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 10 Jun 2022 21:57:39 +0000 Fri, 10 Jun 2022 19:57:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 10 Jun 2022 21:57:39 +0000 Fri, 10 Jun 2022 20:00:33 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518300Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629468Ki\n pods: 110\nSystem Info:\n Machine ID: 3faca96dd267476388422e9ecfe8ffa5\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: a8563bde-8faa-4424-940f-741c59dd35bf\n Kernel Version: 3.10.0-1160.66.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.17\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (11 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-rsh2n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 112m\n kube-system dns-autoscaler-7df78bfcfb-kz7px 20m (0%) 0 (0%) 10Mi (0%) 0 (0%) 116m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n kube-system kube-flannel-xx9h7 150m (0%) 300m (0%) 64M (0%) 500M (0%) 117m\n kube-system kube-multus-ds-amd64-t5pr7 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 117m\n kube-system kube-proxy-rd4j7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 118m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 101m\n kube-system node-feature-discovery-controller-cff799f9f-74qhv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 109m\n monitoring node-exporter-vc67r 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 104m\n monitoring prometheus-operator-585ccfb458-kkb8f 100m (0%) 200m (0%) 100Mi (0%) 200Mi (0%) 104m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1032m (1%) 870m (1%)\n memory 472100Ki (0%) 1034773760 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n"
Jun 10 21:57:48.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe namespace kubectl-1799'
Jun 10 21:57:48.766: INFO: stderr: ""
Jun 10 21:57:48.766: INFO: stdout: "Name: kubectl-1799\nLabels: e2e-framework=kubectl\n e2e-run=d723bdc1-7c63-434c-aeaa-effe634f86ef\n kubernetes.io/metadata.name=kubectl-1799\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:48.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1799" for this suite.
• [SLOW TEST:6.853 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
SS
------------------------------
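The describe spec above drives ordinary kubectl subcommands, so the same inspection can be repeated by hand against the live objects (assuming the namespace still exists when you run it):

  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe pod agnhost-primary-b9tfx
  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe rc agnhost-primary
  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1799 describe service agnhost-primary
  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node master1
  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1799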
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:41.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0610 21:57:41.966764 40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:41.967: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:41.969: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 10 21:57:41.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea" in namespace "projected-3442" to be "Succeeded or Failed"
Jun 10 21:57:41.986: INFO: Pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.732273ms
Jun 10 21:57:43.988: INFO: Pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005570977s
Jun 10 21:57:45.993: INFO: Pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009746701s
Jun 10 21:57:47.997: INFO: Pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014528086s
Jun 10 21:57:50.003: INFO: Pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020442214s
STEP: Saw pod success
Jun 10 21:57:50.003: INFO: Pod "downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea" satisfied condition "Succeeded or Failed"
Jun 10 21:57:50.006: INFO: Trying to get logs from node node2 pod downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea container client-container:
STEP: delete the pod
Jun 10 21:57:50.207: INFO: Waiting for pod downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea to disappear
Jun 10 21:57:50.209: INFO: Pod downwardapi-volume-7cb5b451-93f2-4b02-8be4-bf10c6a39fea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:50.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3442" for this suite.

• [SLOW TEST:8.275 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
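The spec above asserts a per-item file mode on a downward API volume. A minimal sketch of that shape follows; names and the busybox check are illustrative (the suite uses its own agnhost mounttest image), only the volume layout is the point:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                  # illustrative
      command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]   # expect 400
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
          mode: 0400                  # the per-item mode the spec asserts on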
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:42.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W0610 21:57:42.031827 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:42.032: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:42.033: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 10 21:57:42.048: INFO: Waiting up to 5m0s for pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9" in namespace "emptydir-8589" to be "Succeeded or Failed"
Jun 10 21:57:42.051: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239788ms
Jun 10 21:57:44.054: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005402258s
Jun 10 21:57:46.058: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009311682s
Jun 10 21:57:48.061: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012385222s
Jun 10 21:57:50.065: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01628422s
Jun 10 21:57:52.067: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.018862085s
STEP: Saw pod success
Jun 10 21:57:52.067: INFO: Pod "pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9" satisfied condition "Succeeded or Failed"
Jun 10 21:57:52.070: INFO: Trying to get logs from node node2 pod pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9 container test-container:
STEP: delete the pod
Jun 10 21:57:52.085: INFO: Waiting for pod pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9 to disappear
Jun 10 21:57:52.087: INFO: Pod pod-7840f373-b9fe-4e0e-8afe-72e0c68e19d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:52.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8589" for this suite.

• [SLOW TEST:10.085 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
SSSSS
------------------------------
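The (root,0777,default) case above means: run as root, expect 0777 permissions, default emptyDir medium (node disk). A sketch of an equivalent pod; the names and busybox check are illustrative, not the suite's fixture:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0777-demo          # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                  # illustrative
      command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # default medium, i.e. the "default" in (root,0777,default)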
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:42.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0610 21:57:42.048572 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:42.048: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:42.050: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 10 21:57:42.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8" in namespace "projected-8030" to be "Succeeded or Failed"
Jun 10 21:57:42.066: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143995ms
Jun 10 21:57:44.070: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006440149s
Jun 10 21:57:46.075: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010803519s
Jun 10 21:57:48.079: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015397302s
Jun 10 21:57:50.082: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018610992s
Jun 10 21:57:52.086: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022154906s
STEP: Saw pod success
Jun 10 21:57:52.086: INFO: Pod "downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8" satisfied condition "Succeeded or Failed"
Jun 10 21:57:52.088: INFO: Trying to get logs from node node2 pod downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8 container client-container:
STEP: delete the pod
Jun 10 21:57:52.099: INFO: Waiting for pod downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8 to disappear
Jun 10 21:57:52.101: INFO: Pod downwardapi-volume-4041e412-1e49-474c-bdc2-7589c83890b8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:52.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8030" for this suite.

• [SLOW TEST:10.084 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0}
SSSS
------------------------------
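The memory-request spec above exposes a container's own resource request through a downward API volume item. A minimal sketch (names, image, and the 32Mi value are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-memreq-demo     # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                  # illustrative
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
      resources:
        requests:
          memory: 32Mi                # illustrative request
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory   # surfaced to the container as a file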
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:52.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:52.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-291" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:48.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-0a9a9cfb-d458-4839-80ad-618b49911a2c
STEP: Creating a pod to test consume configMaps
Jun 10 21:57:48.823: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac" in namespace "projected-8868" to be "Succeeded or Failed"
Jun 10 21:57:48.826: INFO: Pod "pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232199ms
Jun 10 21:57:50.831: INFO: Pod "pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007302421s
Jun 10 21:57:52.837: INFO: Pod "pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013582315s
STEP: Saw pod success
Jun 10 21:57:52.837: INFO: Pod "pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac" satisfied condition "Succeeded or Failed"
Jun 10 21:57:52.839: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac container agnhost-container:
STEP: delete the pod
Jun 10 21:57:52.860: INFO: Waiting for pod pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac to disappear
Jun 10 21:57:52.863: INFO: Pod pod-projected-configmaps-b8531eaf-c65f-496a-b6e2-62eb133edcac no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:52.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8868" for this suite.
•
------------------------------
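The projected configMap spec above consumes a ConfigMap through a projected volume "with mappings", i.e. a key renamed on disk. A sketch of that shape (all names and the busybox check are illustrative):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-cm-demo           # hypothetical name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-pod-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: agnhost-container
      image: busybox                  # illustrative
      command: ["sh", "-c", "cat /etc/projected/remapped/data-1"]
      volumeMounts:
      - name: cm
        mountPath: /etc/projected
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
            items:
            - key: data-1
              path: remapped/data-1   # the mapping: key renamed on disk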
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:41.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
W0610 21:57:41.944460 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:41.944: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:41.946: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:52.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9833" for this suite.

• [SLOW TEST:11.076 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
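The ResourceQuota spec above watches quota status.used rise when a ReplicaSet is created and fall when it is deleted. A sketch of an object-count quota that behaves the same way; the quota name and count key are assumptions, not read from the log:

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: test-quota                  # hypothetical name
  spec:
    hard:
      count/replicasets.apps: "1"     # object-count quota on ReplicaSets

Usage can then be observed with, for example:

  kubectl get resourcequota test-quota -o jsonpath='{.status.used}'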
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:53.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Jun 10 21:57:53.136: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:53.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6339" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":67,"failed":0}
S
------------------------------
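The Events API spec above creates a labelled set of events and removes them with a single DeleteCollection call. The kubectl equivalent is a label-selected delete; the label key/value here is illustrative, not the one the suite uses:

  kubectl -n events-6339 delete events -l testevent-set=true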
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:50.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1a772638-a52f-4ac0-a640-33ab18507e8b
STEP: Creating a pod to test consume secrets
Jun 10 21:57:50.352: INFO: Waiting up to 5m0s for pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954" in namespace "secrets-9012" to be "Succeeded or Failed"
Jun 10 21:57:50.354: INFO: Pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954": Phase="Pending", Reason="", readiness=false. Elapsed: 1.799423ms
Jun 10 21:57:52.357: INFO: Pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005396087s
Jun 10 21:57:54.361: INFO: Pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009166351s
Jun 10 21:57:56.365: INFO: Pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012872851s
Jun 10 21:57:58.369: INFO: Pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017309138s
STEP: Saw pod success
Jun 10 21:57:58.369: INFO: Pod "pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954" satisfied condition "Succeeded or Failed"
Jun 10 21:57:58.371: INFO: Trying to get logs from node node2 pod pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954 container secret-volume-test:
STEP: delete the pod
Jun 10 21:57:58.382: INFO: Waiting for pod pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954 to disappear
Jun 10 21:57:58.384: INFO: Pod pod-secrets-1a4e241e-b633-4136-b6bf-fdc510c26954 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:57:58.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9012" for this suite.
STEP: Destroying namespace "secret-namespace-994" for this suite.

• [SLOW TEST:8.098 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":48,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
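The Secrets spec above creates two secrets with the same name in two namespaces (hence the two destroyed namespaces) and checks the pod only sees the one in its own namespace. A sketch, with hypothetical namespace and secret names:

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test                 # same name in both namespaces
    namespace: demo-a                 # hypothetical namespaces
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test
    namespace: demo-b
  stringData:
    data-1: other-value
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-volume-demo
    namespace: demo-a
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox                  # illustrative
      command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # prints value-1, never other-value
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test       # resolved in the pod's own namespace only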
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:42.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0610 21:57:42.077959 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 21:57:42.078: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 21:57:42.079: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jun 10 21:57:42.084: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:58:04.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8222" for this suite.

• [SLOW TEST:22.081 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":58,"failed":0}
S
------------------------------
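The Pods spec above submits a pod, observes its creation on a watch, then deletes it gracefully and waits for the deletion event. The same flow by hand, as a sketch with a hypothetical pod name:

  kubectl get pods -w &                       # watch for the ADDED and DELETED events
  kubectl delete pod demo-pod --grace-period=30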
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:52.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Jun 10 21:57:52.256: INFO: The status of Pod labelsupdatec09e93a4-cbb4-4421-954e-9cb0a5d1c135 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:54.261: INFO: The status of Pod labelsupdatec09e93a4-cbb4-4421-954e-9cb0a5d1c135 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:56.260: INFO: The status of Pod labelsupdatec09e93a4-cbb4-4421-954e-9cb0a5d1c135 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:58.263: INFO: The status of Pod labelsupdatec09e93a4-cbb4-4421-954e-9cb0a5d1c135 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:58:00.259: INFO: The status of Pod labelsupdatec09e93a4-cbb4-4421-954e-9cb0a5d1c135 is Running (Ready = true)
Jun 10 21:58:00.779: INFO: Successfully updated pod "labelsupdatec09e93a4-cbb4-4421-954e-9cb0a5d1c135"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:58:04.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5035" for this suite.

• [SLOW TEST:12.594 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":69,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:48.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 21:57:58.064: INFO: Deleting pod "var-expansion-8b1d25c7-defe-4268-b8a3-0ae0548d1733" in namespace "var-expansion-5776"
Jun 10 21:57:58.068: INFO: Wait up to 5m0s for pod "var-expansion-8b1d25c7-defe-4268-b8a3-0ae0548d1733" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:58:06.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5776" for this suite.

• [SLOW TEST:18.063 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
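The Variable Expansion spec above asserts the negative case: a subPathExpr whose expansion contains backticks must be rejected rather than mounted. One plausible shape of such a pod, purely as an assumption-labelled sketch (the suite's actual fixture is not shown in the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo          # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox                  # illustrative
      command: ["sh", "-c", "sleep 3600"]
      env:
      - name: POD_NAME
        value: "`test`"               # backticks survive into the expansion...
      volumeMounts:
      - name: work
        mountPath: /work
        subPathExpr: $(POD_NAME)      # ...so the resolved subPath is invalid and the pod never starts
    volumes:
    - name: work
      emptyDir: {}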
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:58:06.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 10 21:58:06.165: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 10 21:58:06.169: INFO: starting watch
STEP: patching
STEP: updating
Jun 10 21:58:06.185: INFO: waiting for watch events with expected annotations
Jun 10 21:58:06.185: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:58:06.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6354" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0}
SSSSSSSSSSS
------------------------------
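The EndpointSlice spec above is plain CRUD against discovery.k8s.io/v1. A minimal object it could operate on, with hypothetical names and addresses, plus the patch/delete steps in kubectl form:

  apiVersion: discovery.k8s.io/v1
  kind: EndpointSlice
  metadata:
    name: demo-slice                  # hypothetical name
    labels:
      kubernetes.io/service-name: demo   # illustrative owner label
  addressType: IPv4
  ports:
  - name: http
    protocol: TCP
    port: 80
  endpoints:
  - addresses:
    - "10.244.3.111"                  # illustrative address

  kubectl patch endpointslice demo-slice --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
  kubectl delete endpointslices -l kubernetes.io/service-name=demo   # delete as a collection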
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:57:45.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Jun 10 21:57:45.158: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:47.163: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:49.163: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:51.163: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:53.162: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Jun 10 21:57:53.177: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:55.181: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:57.181: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:57:59.181: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:58:01.182: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Jun 10 21:58:01.190: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 10 21:58:01.193: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 10 21:58:03.194: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 10 21:58:03.196: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 10 21:58:05.194: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 10 21:58:05.197: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 10 21:58:07.194: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 10 21:58:07.196: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:58:07.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7410" for this suite.

• [SLOW TEST:22.090 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}
SSSS
------------------------------
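The prestop exec spec above deletes a pod whose preStop hook calls back into a handler pod, then verifies the handler saw the request. A sketch of the hooked pod; the pod name matches the log, but the args, command, and handler address are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook  # name from the log; the rest is a sketch
  spec:
    containers:
    - name: pod-with-prestop-exec-hook
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["pause"]                 # illustrative main process
      lifecycle:
        preStop:
          exec:
            # runs before the container is stopped; the handler pod records the request
            command: ["sh", "-c", "curl http://<handler-pod-ip>:8080/echo?msg=prestop"]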
polling to 34 for networking test based on endpoint count 2 Jun 10 21:58:08.009: INFO: Breadth first check of 10.244.3.110 on host 10.10.190.207... Jun 10 21:58:08.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.120:9080/dial?request=hostname&protocol=udp&host=10.244.3.110&port=8081&tries=1'] Namespace:pod-network-test-3095 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 21:58:08.011: INFO: >>> kubeConfig: /root/.kube/config Jun 10 21:58:08.123: INFO: Waiting for responses: map[] Jun 10 21:58:08.123: INFO: reached 10.244.3.110 after 0/1 tries Jun 10 21:58:08.123: INFO: Breadth first check of 10.244.4.249 on host 10.10.190.208... Jun 10 21:58:08.126: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.120:9080/dial?request=hostname&protocol=udp&host=10.244.4.249&port=8081&tries=1'] Namespace:pod-network-test-3095 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 21:58:08.126: INFO: >>> kubeConfig: /root/.kube/config Jun 10 21:58:08.214: INFO: Waiting for responses: map[] Jun 10 21:58:08.214: INFO: reached 10.244.4.249 after 0/1 tries Jun 10 21:58:08.214: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:08.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3095" for this suite. • [SLOW TEST:26.317 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:57:52.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Jun 10 21:57:52.909: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:57:54.913: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:57:56.912: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 10 21:57:56.928: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:57:58.933: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:00.932: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:02.932: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:04.932: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Jun 10 21:58:04.938: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 10 21:58:04.941: INFO: Pod pod-with-prestop-http-hook still exists Jun 10 21:58:06.942: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 10 21:58:06.945: INFO: Pod pod-with-prestop-http-hook still exists Jun 10 21:58:08.942: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 10 21:58:08.945: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:08.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2656" for this suite. 
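The preStop flow exercised above can be reproduced outside the e2e framework. Below is a minimal client-go sketch of a pod carrying an HTTP preStop hook; the pod name, handler IP, port, and namespace are hypothetical placeholders, and it assumes the v1.21-era API seen in this run, where the hook type is still corev1.Handler (renamed LifecycleHandler in later releases).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite itself uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"pause"},
				Lifecycle: &corev1.Lifecycle{
					// Fired by the kubelet just before the container is stopped;
					// the e2e test points this at its pod-handle-http-request pod.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.0.0.10", // hypothetical handler pod IP
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Deleting such a pod makes the kubelet issue the GET before sending SIGTERM, which is what the "check prestop hook" step verifies on the handler side.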
• [SLOW TEST:16.084 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:08.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:12.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2586" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:06.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 10 21:58:06.276: INFO: Waiting up to 5m0s for pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0" in namespace "security-context-1186" to be "Succeeded or Failed" Jun 10 21:58:06.279: INFO: Pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409426ms Jun 10 21:58:08.284: INFO: Pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007228093s Jun 10 21:58:10.288: INFO: Pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011134515s Jun 10 21:58:12.290: INFO: Pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013510105s Jun 10 21:58:14.294: INFO: Pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017644969s STEP: Saw pod success Jun 10 21:58:14.294: INFO: Pod "security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0" satisfied condition "Succeeded or Failed" Jun 10 21:58:14.297: INFO: Trying to get logs from node node2 pod security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0 container test-container: STEP: delete the pod Jun 10 21:58:14.308: INFO: Waiting for pod security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0 to disappear Jun 10 21:58:14.310: INFO: Pod security-context-c9bf2016-f1d0-4cfb-a2e4-984a206d79b0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:14.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1186" for this suite. • [SLOW TEST:8.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":60,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:57:58.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9051" for this suite. 
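The scoped-quota behaviour verified above comes from the Scopes field on ResourceQuota: a BestEffort-scoped quota only counts pods with no resource requests or limits, and a NotBestEffort-scoped quota counts the rest. A minimal sketch of the two quotas the test pairs up, assuming client-go against a v1.21 cluster; names and the pod limit are illustrative.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Counts only BestEffort pods (no requests/limits on any container).
	bestEffort := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
		Spec: corev1.ResourceQuotaSpec{
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
		},
	}
	// Counts only pods that do set requests or limits.
	notBestEffort := bestEffort.DeepCopy()
	notBestEffort.Name = "quota-not-besteffort"
	notBestEffort.Spec.Scopes = []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeNotBestEffort}
	for _, rq := range []*corev1.ResourceQuota{bestEffort, notBestEffort} {
		if _, err := cs.CoreV1().ResourceQuotas("default").Create(context.TODO(), rq, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}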
• [SLOW TEST:16.104 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":3,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:07.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:15.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3186" for this suite. 
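What the kubelet test above asserts is visible in the pod's container statuses: a container whose command always fails ends up in a terminated state with a non-empty reason. A short client-go sketch of the same check; pod name and namespace are hypothetical.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		// For a command that always fails, the kubelet surfaces a
		// terminated state with a non-empty Reason (typically "Error").
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s: reason=%q exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
		}
	}
}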
• [SLOW TEST:8.053 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:57:53.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5839 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5839 STEP: creating replication controller externalsvc in namespace services-5839 I0610 21:57:53.202780 25 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5839, replica count: 2 I0610 21:57:56.255519 25 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:57:59.256867 25 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:58:02.257192 25 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 10 21:58:02.272: INFO: Creating new exec pod Jun 10 21:58:08.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5839 exec execpod9gfc6 -- /bin/sh -x -c nslookup nodeport-service.services-5839.svc.cluster.local' Jun 10 21:58:08.553: INFO: stderr: "+ nslookup nodeport-service.services-5839.svc.cluster.local\n" Jun 10 21:58:08.553: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-5839.svc.cluster.local\tcanonical name = externalsvc.services-5839.svc.cluster.local.\nName:\texternalsvc.services-5839.svc.cluster.local\nAddress: 10.233.59.253\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5839, will wait for the garbage collector to delete the pods Jun 10 21:58:08.611: INFO: Deleting ReplicationController externalsvc took: 4.3811ms Jun 10 21:58:08.712: INFO: Terminating ReplicationController externalsvc pods took: 100.945505ms Jun 10 21:58:16.921: 
INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:16.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5839" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:23.774 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":3,"skipped":68,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:15.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-3480a8be-c63e-4244-8a5e-8935155868d4 STEP: Creating a pod to test consume secrets Jun 10 21:58:15.358: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca" in namespace "projected-9493" to be "Succeeded or Failed" Jun 10 21:58:15.363: INFO: Pod "pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254826ms Jun 10 21:58:17.367: INFO: Pod "pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008094745s Jun 10 21:58:19.370: INFO: Pod "pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011588334s STEP: Saw pod success Jun 10 21:58:19.370: INFO: Pod "pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca" satisfied condition "Succeeded or Failed" Jun 10 21:58:19.373: INFO: Trying to get logs from node node1 pod pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca container projected-secret-volume-test: STEP: delete the pod Jun 10 21:58:19.386: INFO: Waiting for pod pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca to disappear Jun 10 21:58:19.388: INFO: Pod pod-projected-secrets-44ea6351-1dd1-4be2-985c-f32b366002ca no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:19.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9493" for this suite. 
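The "mappings and Item Mode" wording above refers to the Items and Mode fields of a projected secret source: a secret key is remapped to a new file path with explicit permission bits. A sketch under the assumption of an existing Secret named my-secret with a data-1 key; all other names are placeholders.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mode := int32(0400) // the "Item Mode" asserted on the mounted file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
								// Map the key to a different path with an explicit mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "check",
				Image:        "busybox:1.34",
				Command:      []string{"sh", "-c", "stat -c %a /projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/projected"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}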
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":61,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:04.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4968.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4968.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4968.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4968.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 10 21:58:16.875: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.877: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.879: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.881: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.890: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.893: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.895: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.898: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4968.svc.cluster.local from pod dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b: the server could not find the requested resource (get pods dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b) Jun 10 21:58:16.903: INFO: Lookups using dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4968.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4968.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4968.svc.cluster.local jessie_udp@dns-test-service-2.dns-4968.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4968.svc.cluster.local] Jun 10 21:58:21.942: INFO: DNS probes using dns-4968/dns-test-8b9e649b-6aee-4827-9677-f3cdab964c6b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:21.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4968" for this suite. • [SLOW TEST:17.143 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":4,"skipped":73,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:14.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:58:14.422: INFO: The status of Pod busybox-host-aliases0fe0caf1-b0c7-4c44-b3b0-5fdb36c0b0e9 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:16.426: INFO: The status of Pod busybox-host-aliases0fe0caf1-b0c7-4c44-b3b0-5fdb36c0b0e9 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:18.426: INFO: The status of Pod busybox-host-aliases0fe0caf1-b0c7-4c44-b3b0-5fdb36c0b0e9 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:20.425: INFO: The status of Pod busybox-host-aliases0fe0caf1-b0c7-4c44-b3b0-5fdb36c0b0e9 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:22.424: INFO: The status of Pod busybox-host-aliases0fe0caf1-b0c7-4c44-b3b0-5fdb36c0b0e9 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:22.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1695" for this suite. 
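The hostAliases mechanism exercised above is plain pod spec: the kubelet merges the listed entries into the container's /etc/hosts. A minimal sketch with placeholder IP and hostnames.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The kubelet writes these entries into the container's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}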
• [SLOW TEST:8.052 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":93,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:57:52.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-2wcc STEP: Creating a pod to test atomic-volume-subpath Jun 10 21:57:52.161: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2wcc" in namespace "subpath-5318" to be "Succeeded or Failed" Jun 10 21:57:52.163: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218236ms Jun 10 21:57:54.168: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006806439s Jun 10 21:57:56.172: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011271692s Jun 10 21:57:58.176: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014763205s Jun 10 21:58:00.179: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018356661s Jun 10 21:58:02.184: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022838418s Jun 10 21:58:04.187: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 12.02634495s Jun 10 21:58:06.191: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 14.029544025s Jun 10 21:58:08.194: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 16.032855623s Jun 10 21:58:10.197: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 18.036024045s Jun 10 21:58:12.202: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 20.041254742s Jun 10 21:58:14.207: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 22.04578892s Jun 10 21:58:16.211: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.049697227s Jun 10 21:58:18.216: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 26.054998014s Jun 10 21:58:20.220: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 28.058445559s Jun 10 21:58:22.223: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Running", Reason="", readiness=true. Elapsed: 30.061976471s Jun 10 21:58:24.226: INFO: Pod "pod-subpath-test-secret-2wcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.065363386s STEP: Saw pod success Jun 10 21:58:24.227: INFO: Pod "pod-subpath-test-secret-2wcc" satisfied condition "Succeeded or Failed" Jun 10 21:58:24.229: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-2wcc container test-container-subpath-secret-2wcc: STEP: delete the pod Jun 10 21:58:24.257: INFO: Waiting for pod pod-subpath-test-secret-2wcc to disappear Jun 10 21:58:24.259: INFO: Pod pod-subpath-test-secret-2wcc no longer exists STEP: Deleting pod pod-subpath-test-secret-2wcc Jun 10 21:58:24.259: INFO: Deleting pod "pod-subpath-test-secret-2wcc" in namespace "subpath-5318" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:24.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5318" for this suite. • [SLOW TEST:32.152 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:24.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:24.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8567" for this suite. 
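The ServiceAccount lifecycle steps above (create, patch, find by label selector, delete) map one-to-one onto client-go calls. A sketch with hypothetical names and label.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sas := cs.CoreV1().ServiceAccounts("default")
	// Create.
	if _, err := sas.Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: "lifecycle-sa"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Patch a label onto it.
	patch := []byte(`{"metadata":{"labels":{"purpose":"demo"}}}`)
	if _, err := sas.Patch(ctx, "lifecycle-sa", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Find it again via the label selector, across all namespaces.
	list, err := cs.CoreV1().ServiceAccounts("").List(ctx, metav1.ListOptions{LabelSelector: "purpose=demo"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d matching ServiceAccounts\n", len(list.Items))
	// Delete.
	if err := sas.Delete(ctx, "lifecycle-sa", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}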
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":3,"skipped":63,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:12.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 10 21:58:13.030: INFO: The status of Pod labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:15.034: INFO: The status of Pod labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:17.034: INFO: The status of Pod labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:19.035: INFO: The status of Pod labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:21.033: INFO: The status of Pod labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:23.033: INFO: The status of Pod labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead is Running (Ready = true) Jun 10 21:58:23.551: INFO: Successfully updated pod "labelsupdatea5fb774a-08d8-4a12-ae01-09fa39649ead" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5766" for this suite. 
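The labels-on-modification behaviour above relies on a downward API projection: the kubelet rewrites the projected file whenever the pod's labels change, so a container can observe its own metadata updates. A sketch with placeholder names.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key1": "value1"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client",
				Image: "busybox:1.34",
				// Keep re-reading the file; the kubelet rewrites it after a label change.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}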
• [SLOW TEST:12.579 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:21.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 21:58:22.319: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 21:58:24.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495102, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495102, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495102, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495102, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 21:58:27.338: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:27.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3755" for this suite. 
STEP: Destroying namespace "webhook-3755-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.439 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:04.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Jun 10 21:58:04.166: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:28.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8349" for this suite.
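Marking a version unserved, as the CRD test above does, is a single field flip on the CustomResourceDefinition. A sketch using the apiextensions clientset; the CRD name foos.example.com and the version v2 are hypothetical.

package main

import (
	"context"

	apixclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apixclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, "foos.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for i := range crd.Spec.Versions {
		if crd.Spec.Versions[i].Name == "v2" {
			// Stop serving v2; the OpenAPI publisher then drops its definitions
			// from the spec while leaving the other version untouched.
			crd.Spec.Versions[i].Served = false
		}
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Update(ctx, crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}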
• [SLOW TEST:24.733 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":2,"skipped":59,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:22.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Jun 10 21:58:22.510: INFO: Waiting up to 5m0s for pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7" in namespace "emptydir-8300" to be "Succeeded or Failed" Jun 10 21:58:22.513: INFO: Pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.359235ms Jun 10 21:58:24.515: INFO: Pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005570229s Jun 10 21:58:26.519: INFO: Pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009365862s Jun 10 21:58:28.523: INFO: Pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013497487s Jun 10 21:58:30.527: INFO: Pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01734568s STEP: Saw pod success Jun 10 21:58:30.527: INFO: Pod "pod-7228593a-348a-44c5-b1cb-b229422aeee7" satisfied condition "Succeeded or Failed" Jun 10 21:58:30.529: INFO: Trying to get logs from node node2 pod pod-7228593a-348a-44c5-b1cb-b229422aeee7 container test-container: STEP: delete the pod Jun 10 21:58:30.542: INFO: Waiting for pod pod-7228593a-348a-44c5-b1cb-b229422aeee7 to disappear Jun 10 21:58:30.544: INFO: Pod pod-7228593a-348a-44c5-b1cb-b229422aeee7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:30.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8300" for this suite. 
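The emptyDir assertion above checks the permission bits of the mount point. A sketch of an equivalent probe pod; names are placeholders, and on the default medium the expected output is 777.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium selects the node's default (disk-backed) medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "check",
				Image: "busybox:1.34",
				// Prints the mount's permission bits.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}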
• [SLOW TEST:8.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":108,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:30.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:30.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3233" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":7,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:14.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5808.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5808.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5808.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5808.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5808.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5808.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 10 21:58:30.761: INFO: DNS probes using dns-5808/dns-test-be57c379-42fd-4065-8678-6c9933971ca9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:30.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5808" for this suite. • [SLOW TEST:16.075 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:30.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:30.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8658" for this suite. 
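The "secure master service" checked above is the built-in kubernetes Service in the default namespace. A short sketch that lists its ports, where the expectation is the https port, 443/TCP.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The API server publishes itself as the "kubernetes" Service in "default".
	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		// Expect an https port on 443/TCP.
		fmt.Printf("%s %s/%d -> %s\n", p.Name, p.Protocol, p.Port, p.TargetPort.String())
	}
}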
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":5,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:25.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3679.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3679.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3679.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3679.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3679.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3679.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 10 21:58:35.741: INFO: DNS probes using dns-3679/dns-test-817e38cc-74c1-47d3-a1ce-58223f04df17 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:35.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3679" for this suite. 
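The hostname records probed above resolve through the cluster DNS service. A tiny sketch of the same lookup in Go; it is only meaningful when run from a pod inside the cluster, and the name shown matches the dns-3679 namespace used above.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolvable only in-cluster, where the cluster DNS service
	// answers for .cluster.local names via /etc/resolv.conf.
	name := "dns-querier-2.dns-test-service-2.dns-3679.svc.cluster.local"
	addrs, err := net.LookupHost(name)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(name, "->", addrs)
}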
• [SLOW TEST:10.091 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":81,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:27.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 10 21:58:33.980: INFO: Successfully updated pod "adopt-release-cv8x7" STEP: Checking that the Job readopts the Pod Jun 10 21:58:33.980: INFO: Waiting up to 15m0s for pod "adopt-release-cv8x7" in namespace "job-3023" to be "adopted" Jun 10 21:58:33.982: INFO: Pod "adopt-release-cv8x7": Phase="Running", Reason="", readiness=true. Elapsed: 2.175154ms Jun 10 21:58:35.985: INFO: Pod "adopt-release-cv8x7": Phase="Running", Reason="", readiness=true. Elapsed: 2.005136785s Jun 10 21:58:35.985: INFO: Pod "adopt-release-cv8x7" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 10 21:58:36.494: INFO: Successfully updated pod "adopt-release-cv8x7" STEP: Checking that the Job releases the Pod Jun 10 21:58:36.494: INFO: Waiting up to 15m0s for pod "adopt-release-cv8x7" in namespace "job-3023" to be "released" Jun 10 21:58:36.496: INFO: Pod "adopt-release-cv8x7": Phase="Running", Reason="", readiness=true. Elapsed: 2.303512ms Jun 10 21:58:38.501: INFO: Pod "adopt-release-cv8x7": Phase="Running", Reason="", readiness=true. Elapsed: 2.007032316s Jun 10 21:58:38.501: INFO: Pod "adopt-release-cv8x7" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:38.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3023" for this suite. 
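The "Orphaning" and "Removing the labels" steps in the Job test above are both plain pod patches. A hedged sketch of how they could be issued with client-go merge patches; the `job-name`/`controller-uid` label keys are the defaults the Job controller applies, assumed here rather than shown in the log:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"

	"context"
)

// Orphan a Job's pod, then release it, mirroring the two steps in the log.
func orphanAndRelease(cs *kubernetes.Clientset, ns, pod string) error {
	ctx := context.TODO()
	// Step 1: drop the ownerReference; the Job controller re-adopts the pod
	// because its labels still match the Job's selector ("adopted").
	orphan := []byte(`{"metadata":{"ownerReferences":null}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod,
		types.MergePatchType, orphan, metav1.PatchOptions{}); err != nil {
		return err
	}
	// Step 2: remove the selector labels; the pod no longer matches, so the
	// controller releases it ("released").
	release := []byte(`{"metadata":{"labels":{"job-name":null,"controller-uid":null}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, pod,
		types.MergePatchType, release, metav1.PatchOptions{})
	return err
}

func main() {}
```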
• [SLOW TEST:11.082 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":6,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:35.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 10 21:58:35.813: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8908 616167d8-1775-47b4-af89-4f67da1d570e 33332 0 2022-06-10 21:58:35 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-06-10 21:58:35 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-75r2m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75r2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 21:58:35.817: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:37.821: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:39.824: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jun 10 21:58:39.824: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8908 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 21:58:39.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Jun 10 21:58:39.921: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8908 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 21:58:39.921: INFO: >>> kubeConfig: /root/.kube/config Jun 10 21:58:40.009: INFO: Deleting pod test-dns-nameservers... 
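The long pod dump above reduces to a few fields. A sketch of the same pod keeping only what the test exercises, with values taken from the log: dnsPolicy "None" tells the kubelet to ignore cluster DNS and build resolv.conf purely from dnsConfig:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var testDNSPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
	Spec: corev1.PodSpec{
		DNSPolicy: corev1.DNSNone, // kubelet writes resolv.conf from DNSConfig only
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
		},
		Containers: []corev1.Container{{
			Name:  "agnhost-container",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			Args:  []string{"pause"},
		}},
	},
}

func main() {} // create with clientset.CoreV1().Pods(ns).Create(...)
```

The test then execs `/agnhost dns-suffix` and `/agnhost dns-server-list` in the container, as logged above, to confirm both settings landed in the pod's resolv.conf.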
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:40.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8908" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":5,"skipped":88,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:30.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4270 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4270 I0610 21:58:31.003139 40 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4270, replica count: 2 I0610 21:58:34.054161 40 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:58:37.054633 40 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 21:58:37.054: INFO: Creating new exec pod Jun 10 21:58:42.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4270 exec execpodsqrjk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jun 10 21:58:42.346: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jun 10 21:58:42.346: INFO: stdout: "externalname-service-8949v" Jun 10 21:58:42.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4270 exec execpodsqrjk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.43.170 80' Jun 10 21:58:42.620: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.43.170 80\nConnection to 10.233.43.170 80 port [tcp/http] succeeded!\n" Jun 10 21:58:42.620: INFO: stdout: "externalname-service-8949v" Jun 10 21:58:42.620: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:42.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4270" for this suite. 
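A sketch of the type flip the Services test above performs, using client-go. The service name, port, and namespace handling follow the log; the ExternalName target and the selector are assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func flipToClusterIP(cs *kubernetes.Clientset, ns string) error {
	ctx := context.TODO()
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com", // assumption: the log omits the target
		},
	}
	created, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Switch types in place: clear externalName, add a port and a selector so
	// the service now fronts the replication controller's pods on port 80.
	created.Spec.Type = corev1.ServiceTypeClusterIP
	created.Spec.ExternalName = ""
	created.Spec.Selector = map[string]string{"name": "externalname-service"} // assumed
	created.Spec.Ports = []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}}
	_, err = cs.CoreV1().Services(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}

func main() {}
```

The `nc` checks in the log then verify the same backends answer both by service name and by the new ClusterIP (10.233.43.170).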
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.670 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":6,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:38.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 21:58:39.549: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 10 21:58:41.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495119, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495119, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495119, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495119, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 21:58:44.578: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations 
and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:44.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8084" for this suite. STEP: Destroying namespace "webhook-8084-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.041 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:42.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 10 21:58:42.714: INFO: Waiting up to 5m0s for pod "security-context-eaac4c32-172c-4373-ac4a-fb881682d76d" in namespace "security-context-3054" to be "Succeeded or Failed" Jun 10 21:58:42.717: INFO: Pod "security-context-eaac4c32-172c-4373-ac4a-fb881682d76d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.080574ms Jun 10 21:58:44.722: INFO: Pod "security-context-eaac4c32-172c-4373-ac4a-fb881682d76d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008609185s Jun 10 21:58:46.725: INFO: Pod "security-context-eaac4c32-172c-4373-ac4a-fb881682d76d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011734753s STEP: Saw pod success Jun 10 21:58:46.725: INFO: Pod "security-context-eaac4c32-172c-4373-ac4a-fb881682d76d" satisfied condition "Succeeded or Failed" Jun 10 21:58:46.728: INFO: Trying to get logs from node node1 pod security-context-eaac4c32-172c-4373-ac4a-fb881682d76d container test-container: STEP: delete the pod Jun 10 21:58:46.783: INFO: Waiting for pod security-context-eaac4c32-172c-4373-ac4a-fb881682d76d to disappear Jun 10 21:58:46.785: INFO: Pod security-context-eaac4c32-172c-4373-ac4a-fb881682d76d no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:46.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3054" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":219,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:46.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:46.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6094" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":8,"skipped":235,"failed":0} [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:46.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Jun 10 21:58:46.940: INFO: created test-event-1 Jun 10 21:58:46.943: INFO: created test-event-2 Jun 10 21:58:46.946: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jun 10 21:58:46.948: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jun 10 21:58:46.960: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:46.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4259" for this suite. 
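The Events test's bulk deletion above is a single DeleteCollection call scoped by a label selector. A sketch with a hypothetical label, since the log only says the events carry "a label":

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Delete the labelled set of events (test-event-1..3) in one call.
func deleteTestEvents(cs *kubernetes.Clientset, ns string) error {
	sel := "testevent-set=true" // hypothetical label applied at creation time
	return cs.CoreV1().Events(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: sel})
}

func main() {}
```

The follow-up list with the same selector, as in the log, should come back empty.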
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":9,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:08.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:58:09.007: INFO: created pod Jun 10 21:58:09.007: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1605" to be "Succeeded or Failed" Jun 10 21:58:09.010: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097145ms Jun 10 21:58:11.012: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0048512s Jun 10 21:58:13.016: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008875753s Jun 10 21:58:15.021: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01336166s Jun 10 21:58:17.024: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01689999s STEP: Saw pod success Jun 10 21:58:17.024: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Jun 10 21:58:47.025: INFO: polling logs Jun 10 21:58:47.038: INFO: Pod logs: 2022/06/10 21:58:13 OK: Got token 2022/06/10 21:58:13 validating with in-cluster discovery 2022/06/10 21:58:13 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/06/10 21:58:13 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-1605:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1654898889, NotBefore:1654898289, IssuedAt:1654898289, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1605", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"b66ee5c9-43d5-4695-8099-68f08a16f0ac"}}} 2022/06/10 21:58:13 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/06/10 21:58:13 OK: Validated signature on JWT 2022/06/10 21:58:13 OK: Got valid claims from token! 2022/06/10 21:58:13 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-1605:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1654898889, NotBefore:1654898289, IssuedAt:1654898289, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1605", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"b66ee5c9-43d5-4695-8099-68f08a16f0ac"}}} Jun 10 21:58:47.038: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:47.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1605" for this suite. 
• [SLOW TEST:38.079 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:40.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:58:40.131: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 10 21:58:45.134: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Jun 10 21:58:45.141: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Jun 10 21:58:45.151: INFO: observed ReplicaSet test-rs in namespace replicaset-9586 with ReadyReplicas 1, AvailableReplicas 1 Jun 10 21:58:45.173: INFO: observed ReplicaSet test-rs in namespace replicaset-9586 with ReadyReplicas 1, AvailableReplicas 1 Jun 10 21:58:45.197: INFO: observed ReplicaSet test-rs in namespace replicaset-9586 with ReadyReplicas 1, AvailableReplicas 1 Jun 10 21:58:45.200: INFO: observed ReplicaSet test-rs in namespace replicaset-9586 with ReadyReplicas 1, AvailableReplicas 1 Jun 10 21:58:49.160: INFO: observed ReplicaSet test-rs in namespace replicaset-9586 with ReadyReplicas 2, AvailableReplicas 2 Jun 10 21:58:49.169: INFO: observed Replicaset test-rs in namespace replicaset-9586 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:49.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9586" for this suite. 
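The scale-up of "test-rs" logged above can be expressed as a one-line merge patch on spec.replicas; the ReadyReplicas progression 1 → 2 → 3 is the controller catching up afterwards. A sketch of just the scale step (the test's subsequent status patch is omitted):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// Scale "test-rs" to 3 replicas with a JSON merge patch.
func scaleReplicaSet(cs *kubernetes.Clientset, ns string) error {
	patch := []byte(`{"spec":{"replicas":3}}`)
	_, err := cs.AppsV1().ReplicaSets(ns).Patch(context.TODO(),
		"test-rs", types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {}
```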
• [SLOW TEST:9.077 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":6,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:24.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-mj8f STEP: Creating a pod to test atomic-volume-subpath Jun 10 21:58:24.405: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mj8f" in namespace "subpath-8832" to be "Succeeded or Failed" Jun 10 21:58:24.407: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17559ms Jun 10 21:58:26.411: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005416021s Jun 10 21:58:28.418: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013008915s Jun 10 21:58:30.422: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016974805s Jun 10 21:58:32.427: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 8.022299294s Jun 10 21:58:34.434: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 10.029065547s Jun 10 21:58:36.442: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 12.036327975s Jun 10 21:58:38.448: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 14.042468015s Jun 10 21:58:40.451: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 16.045972466s Jun 10 21:58:42.455: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 18.049922905s Jun 10 21:58:44.465: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 20.059555982s Jun 10 21:58:46.471: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 22.065998042s Jun 10 21:58:48.480: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 24.075115435s Jun 10 21:58:50.484: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Running", Reason="", readiness=true. Elapsed: 26.078727781s Jun 10 21:58:52.489: INFO: Pod "pod-subpath-test-configmap-mj8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.083583997s STEP: Saw pod success Jun 10 21:58:52.489: INFO: Pod "pod-subpath-test-configmap-mj8f" satisfied condition "Succeeded or Failed" Jun 10 21:58:52.492: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-mj8f container test-container-subpath-configmap-mj8f: STEP: delete the pod Jun 10 21:58:52.505: INFO: Waiting for pod pod-subpath-test-configmap-mj8f to disappear Jun 10 21:58:52.507: INFO: Pod pod-subpath-test-configmap-mj8f no longer exists STEP: Deleting pod pod-subpath-test-configmap-mj8f Jun 10 21:58:52.507: INFO: Deleting pod "pod-subpath-test-configmap-mj8f" in namespace "subpath-8832" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:52.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8832" for this suite. • [SLOW TEST:28.150 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:44.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:58:44.784: INFO: Creating pod... Jun 10 21:58:44.799: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:45.804: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:46.803: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:47.802: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:48.802: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:49.802: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:50.801: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:51.803: INFO: Pod Quantity: 1 Status: Pending Jun 10 21:58:52.803: INFO: Pod Status: Running Jun 10 21:58:52.803: INFO: Creating service... 
Jun 10 21:58:52.810: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/DELETE Jun 10 21:58:52.813: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Jun 10 21:58:52.813: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/GET Jun 10 21:58:52.815: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Jun 10 21:58:52.815: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/HEAD Jun 10 21:58:52.817: INFO: http.Client request:HEAD | StatusCode:200 Jun 10 21:58:52.817: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/OPTIONS Jun 10 21:58:52.819: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Jun 10 21:58:52.819: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/PATCH Jun 10 21:58:52.822: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Jun 10 21:58:52.822: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/POST Jun 10 21:58:52.824: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Jun 10 21:58:52.824: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/pods/agnhost/proxy/some/path/with/PUT Jun 10 21:58:52.826: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Jun 10 21:58:52.826: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/DELETE Jun 10 21:58:52.830: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Jun 10 21:58:52.830: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/GET Jun 10 21:58:52.834: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Jun 10 21:58:52.834: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/HEAD Jun 10 21:58:52.837: INFO: http.Client request:HEAD | StatusCode:200 Jun 10 21:58:52.837: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/OPTIONS Jun 10 21:58:52.841: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Jun 10 21:58:52.841: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/PATCH Jun 10 21:58:52.844: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Jun 10 21:58:52.844: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/POST Jun 10 21:58:52.847: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Jun 10 21:58:52.847: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-147/services/test-service/proxy/some/path/with/PUT Jun 10 21:58:52.851: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:52.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-147" for this suite. • [SLOW TEST:8.107 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":8,"skipped":174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:47.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 10 21:58:47.059: INFO: Waiting up to 5m0s for pod "pod-3deff001-84bd-4772-ac59-ff0d5e443e5b" in namespace "emptydir-6109" to be "Succeeded or Failed" Jun 10 21:58:47.061: INFO: Pod "pod-3deff001-84bd-4772-ac59-ff0d5e443e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203592ms Jun 10 21:58:49.066: INFO: Pod "pod-3deff001-84bd-4772-ac59-ff0d5e443e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007266353s Jun 10 21:58:51.072: INFO: Pod "pod-3deff001-84bd-4772-ac59-ff0d5e443e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012992332s Jun 10 21:58:53.076: INFO: Pod "pod-3deff001-84bd-4772-ac59-ff0d5e443e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0174226s STEP: Saw pod success Jun 10 21:58:53.076: INFO: Pod "pod-3deff001-84bd-4772-ac59-ff0d5e443e5b" satisfied condition "Succeeded or Failed" Jun 10 21:58:53.078: INFO: Trying to get logs from node node2 pod pod-3deff001-84bd-4772-ac59-ff0d5e443e5b container test-container: STEP: delete the pod Jun 10 21:58:53.091: INFO: Waiting for pod pod-3deff001-84bd-4772-ac59-ff0d5e443e5b to disappear Jun 10 21:58:53.093: INFO: Pod pod-3deff001-84bd-4772-ac59-ff0d5e443e5b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:53.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6109" for this suite. 
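The tmpfs behavior in the EmptyDir test above comes entirely from the volume's medium field: "Memory" makes the kubelet back the emptyDir with RAM (a tmpfs mount), whose mode the test then inspects. A sketch of the relevant pod shape (pod name illustrative; the real test runs a mount-inspection helper rather than pause):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var tmpfsPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" = tmpfs; omit Medium for node-disk backing.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			Args:  []string{"pause"},
			VolumeMounts: []corev1.VolumeMount{{
				Name: "test-volume", MountPath: "/test-volume",
			}},
		}},
	},
}

func main() {}
```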
• [SLOW TEST:6.075 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:47.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 10 21:58:47.100: INFO: Waiting up to 5m0s for pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132" in namespace "emptydir-3029" to be "Succeeded or Failed" Jun 10 21:58:47.102: INFO: Pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149385ms Jun 10 21:58:49.104: INFO: Pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004600778s Jun 10 21:58:51.108: INFO: Pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008377271s Jun 10 21:58:53.112: INFO: Pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012027118s Jun 10 21:58:55.116: INFO: Pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016471116s STEP: Saw pod success Jun 10 21:58:55.116: INFO: Pod "pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132" satisfied condition "Succeeded or Failed" Jun 10 21:58:55.119: INFO: Trying to get logs from node node1 pod pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132 container test-container: STEP: delete the pod Jun 10 21:58:55.131: INFO: Waiting for pod pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132 to disappear Jun 10 21:58:55.133: INFO: Pod pod-8bbcb2c8-dbe4-4981-a185-db6985bc3132 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:55.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3029" for this suite. 
• [SLOW TEST:8.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:49.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Jun 10 21:58:49.309: INFO: The status of Pod pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:51.314: INFO: The status of Pod pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:53.314: INFO: The status of Pod pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:55.313: INFO: The status of Pod pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 10 21:58:55.827: INFO: Successfully updated pod "pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942" Jun 10 21:58:55.827: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942" in namespace "pods-8463" to be "terminated due to deadline exceeded" Jun 10 21:58:55.830: INFO: Pod "pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942": Phase="Running", Reason="", readiness=true. Elapsed: 2.583366ms Jun 10 21:58:57.837: INFO: Pod "pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009311126s Jun 10 21:58:57.837: INFO: Pod "pod-update-activedeadlineseconds-62bf30d6-7b08-4dfd-8c64-af0db86ca942" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:57.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8463" for this suite. 
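On the activeDeadlineSeconds test above: that field is one of the few pod-spec fields that may be changed on a running pod, and shortening it is exactly what produces the Phase="Failed", Reason="DeadlineExceeded" transition in the log. A sketch of the update; the new value is illustrative:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// Shorten a running pod's deadline; the kubelet then kills it and the pod
// fails with reason DeadlineExceeded.
func shortenDeadline(cs *kubernetes.Clientset, ns, pod string) error {
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`) // value illustrative
	_, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod,
		types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {}
```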
• [SLOW TEST:8.585 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:52.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-2f4cdf21-53b5-4c03-8b0d-215491313150 STEP: Creating a pod to test consume configMaps Jun 10 21:58:52.556: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691" in namespace "projected-9990" to be "Succeeded or Failed" Jun 10 21:58:52.558: INFO: Pod "pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016889ms Jun 10 21:58:54.561: INFO: Pod "pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004704308s Jun 10 21:58:56.569: INFO: Pod "pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012830137s Jun 10 21:58:58.573: INFO: Pod "pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016108866s STEP: Saw pod success Jun 10 21:58:58.573: INFO: Pod "pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691" satisfied condition "Succeeded or Failed" Jun 10 21:58:58.575: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691 container agnhost-container: STEP: delete the pod Jun 10 21:58:58.746: INFO: Waiting for pod pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691 to disappear Jun 10 21:58:58.747: INFO: Pod pod-projected-configmaps-2664a6e2-ec9c-4231-b057-cd9163f8d691 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:58.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9990" for this suite. 
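A sketch of the volume shape behind the projected-configMap test above: the ConfigMap name comes from the log, while the key-to-path mapping and the non-root UID are assumptions about what the test is exercising:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

var uid int64 = 1000 // assumed non-root UID

// A projected volume remapping one ConfigMap key to a nested path.
var projectedVolume = corev1.Volume{
	Name: "projected-configmap-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "projected-configmap-test-volume-map-2f4cdf21-53b5-4c03-8b0d-215491313150",
					},
					// Assumed mapping: the "as non-root with mappings" variant
					// reads the key back from the remapped path.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
				},
			}},
		},
	},
}

// Container-level security context making the reader run as a non-root user.
var securityCtx = corev1.SecurityContext{RunAsUser: &uid}

func main() {}
```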
• [SLOW TEST:6.235 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":64,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:30.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-1119 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 10 21:58:30.692: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 21:58:30.722: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:32.725: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:34.726: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:36.725: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:38.727: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:40.725: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:42.728: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:44.726: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:46.725: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:48.726: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:50.725: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 21:58:52.726: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 10 21:58:52.731: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 10 21:58:56.768: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 10 21:58:56.768: INFO: Going to poll 10.244.3.127 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jun 10 21:58:56.771: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.127 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1119 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 21:58:56.771: INFO: >>> kubeConfig: /root/.kube/config Jun 10 21:58:57.859: INFO: Found all 1 expected endpoints: [netserver-0] Jun 10 21:58:57.859: INFO: Going to poll 10.244.4.26 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jun 10 21:58:57.861: INFO: ExecWithOptions {Command:[/bin/sh -c echo 
hostName | nc -w 1 -u 10.244.4.26 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1119 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 21:58:57.861: INFO: >>> kubeConfig: /root/.kube/config Jun 10 21:58:59.259: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:58:59.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1119" for this suite. • [SLOW TEST:28.596 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:52.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Jun 10 21:58:52.966: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:54.970: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:56.969: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:58:58.974: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 10 21:58:59.990: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:01.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6244" for this suite. 
• [SLOW TEST:8.083 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:53.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-a4a6323f-31fc-47bb-8f7b-74b64919da85 STEP: Creating a pod to test consume configMaps Jun 10 21:58:53.206: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561" in namespace "configmap-378" to be "Succeeded or Failed" Jun 10 21:58:53.208: INFO: Pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561": Phase="Pending", Reason="", readiness=false. Elapsed: 1.935876ms Jun 10 21:58:55.213: INFO: Pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006860173s Jun 10 21:58:57.218: INFO: Pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011091001s Jun 10 21:58:59.222: INFO: Pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015306516s Jun 10 21:59:01.225: INFO: Pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018239868s STEP: Saw pod success Jun 10 21:59:01.225: INFO: Pod "pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561" satisfied condition "Succeeded or Failed" Jun 10 21:59:01.227: INFO: Trying to get logs from node node1 pod pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561 container agnhost-container: STEP: delete the pod Jun 10 21:59:01.244: INFO: Waiting for pod pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561 to disappear Jun 10 21:59:01.246: INFO: Pod pod-configmaps-4b44edee-a067-49a1-a218-1228b48c4561 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:01.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-378" for this suite. 
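Annotation: the pod above mounts a ConfigMap as a volume and exits once it has read the projected file, which is why the framework waits for "Succeeded or Failed" rather than for readiness. A minimal sketch of the same shape; namespace, names, and image are assumptions (the suite itself uses an agnhost image).

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, ns := context.TODO(), "default" // placeholder namespace

        // ConfigMap whose single key should appear as a file inside the pod.
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-cm"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // One-shot pod that mounts the ConfigMap and cats the projected file;
        // a "Succeeded or Failed" wait like the framework's would follow.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-cm-consumer"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "c",
                    Image:   "busybox:1.35", // placeholder image
                    Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

The defaultMode variants of this test seen elsewhere in the run differ only in setting ConfigMapVolumeSource.DefaultMode (an *int32 file mode) and then checking the mounted file's permissions.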
• [SLOW TEST:8.082 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":292,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:55.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 21:58:55.396: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 10 21:58:57.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 21:58:59.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495135, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 21:59:02.419: INFO: 
Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 10 21:59:02.433: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:02.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9741" for this suite. STEP: Destroying namespace "webhook-9741-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.308 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:01.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-a9c111db-89db-4ac2-85e9-4a8f896fe67f STEP: Creating a pod to test consume secrets Jun 10 21:59:01.183: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d" in namespace "projected-1326" to be "Succeeded or Failed" Jun 10 21:59:01.188: INFO: Pod "pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.048953ms Jun 10 21:59:03.191: INFO: Pod "pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007945594s Jun 10 21:59:05.193: INFO: Pod "pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010452839s STEP: Saw pod success Jun 10 21:59:05.193: INFO: Pod "pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d" satisfied condition "Succeeded or Failed" Jun 10 21:59:05.198: INFO: Trying to get logs from node node1 pod pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d container projected-secret-volume-test: STEP: delete the pod Jun 10 21:59:05.218: INFO: Waiting for pod pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d to disappear Jun 10 21:59:05.220: INFO: Pod pod-projected-secrets-57dfe1ba-b408-4814-8b05-c921343d9c7d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:05.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1326" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":262,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:05.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:05.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9712" for this suite. 
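Annotation: the Table test above hinges on content negotiation. A client may ask the apiserver for a server-side-rendered Table via the Accept header, and a backend that implements no metadata support must answer 406 Not Acceptable. A sketch of the request follows; note that against a regular resource such as pods this returns 200 with a meta.k8s.io/v1 Table, whereas the conformance test deliberately targets a backend that cannot render one.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Request the Table representation of a resource. Endpoints that can
        // serve it answer 200 with a Table object; a backend without metadata
        // support must answer 406 Not Acceptable.
        var code int
        res := cs.CoreV1().RESTClient().Get().
            AbsPath("/api/v1/namespaces/default/pods").
            SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
            Do(context.TODO())
        res.StatusCode(&code)
        fmt.Println("HTTP status:", code)
    }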
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":11,"skipped":263,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:59.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-e394b26e-a805-4f3e-be31-7636cb5a7059 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:05.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7274" for this suite. • [SLOW TEST:6.072 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":164,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:02.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 10 21:59:02.532: INFO: Waiting up to 5m0s for pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7" in namespace "downward-api-6390" to be "Succeeded or Failed" Jun 10 21:59:02.535: INFO: Pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.145817ms Jun 10 21:59:04.541: INFO: Pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008472359s Jun 10 21:59:06.548: INFO: Pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015198908s Jun 10 21:59:08.552: INFO: Pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019659722s Jun 10 21:59:10.557: INFO: Pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.024799131s STEP: Saw pod success Jun 10 21:59:10.557: INFO: Pod "downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7" satisfied condition "Succeeded or Failed" Jun 10 21:59:10.559: INFO: Trying to get logs from node node1 pod downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7 container dapi-container: STEP: delete the pod Jun 10 21:59:10.591: INFO: Waiting for pod downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7 to disappear Jun 10 21:59:10.593: INFO: Pod downward-api-a490f9c0-4a71-4ee5-8182-610a144400c7 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:10.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6390" for this suite. • [SLOW TEST:8.103 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:05.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics Jun 10 21:59:11.367: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 10 21:59:11.432: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:11.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1277" for this suite. 
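Annotation: "if the deleteOptions says so" refers to the deletion propagation policy. With Foreground propagation the API server parks a foregroundDeletion finalizer on the replication controller, so the RC object remains visible until the garbage collector has removed every pod it owns, which is exactly what the test asserts before gathering the controller-manager metrics above. A sketch, with namespace and RC name as placeholders:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Foreground propagation: the RC is kept (with a foregroundDeletion
        // finalizer) until the garbage collector has deleted all of its pods.
        policy := metav1.DeletePropagationForeground
        err = cs.CoreV1().ReplicationControllers("default").Delete(
            context.TODO(), "my-rc",
            metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }

The other policies behave differently: Background deletes the RC at once and collects the pods afterwards, and Orphan removes the RC while leaving its pods running.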
• [SLOW TEST:6.158 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":12,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:01.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:59:01.290: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 10 21:59:09.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5323 --namespace=crd-publish-openapi-5323 create -f -' Jun 10 21:59:09.946: INFO: stderr: "" Jun 10 21:59:09.947: INFO: stdout: "e2e-test-crd-publish-openapi-2925-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 10 21:59:09.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5323 --namespace=crd-publish-openapi-5323 delete e2e-test-crd-publish-openapi-2925-crds test-cr' Jun 10 21:59:10.122: INFO: stderr: "" Jun 10 21:59:10.122: INFO: stdout: "e2e-test-crd-publish-openapi-2925-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 10 21:59:10.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5323 --namespace=crd-publish-openapi-5323 apply -f -' Jun 10 21:59:10.473: INFO: stderr: "" Jun 10 21:59:10.473: INFO: stdout: "e2e-test-crd-publish-openapi-2925-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 10 21:59:10.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5323 --namespace=crd-publish-openapi-5323 delete e2e-test-crd-publish-openapi-2925-crds test-cr' Jun 10 21:59:10.654: INFO: stderr: "" Jun 10 21:59:10.654: INFO: stdout: "e2e-test-crd-publish-openapi-2925-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 10 21:59:10.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5323 explain e2e-test-crd-publish-openapi-2925-crds' Jun 10 21:59:11.042: INFO: stderr: "" Jun 10 21:59:11.042: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2925-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:14.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5323" for this suite. • [SLOW TEST:13.440 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":12,"skipped":297,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:05.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0a62cacc-dadb-4e6e-b72d-31254794e62e STEP: Creating a pod to test consume configMaps Jun 10 21:59:05.447: INFO: Waiting up to 5m0s for pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83" in namespace "configmap-7329" to be "Succeeded or Failed" Jun 10 21:59:05.448: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Pending", Reason="", readiness=false. Elapsed: 1.829907ms Jun 10 21:59:07.451: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004404229s Jun 10 21:59:09.455: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008693168s Jun 10 21:59:11.459: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012055142s Jun 10 21:59:13.463: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015884774s Jun 10 21:59:15.467: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020088658s Jun 10 21:59:17.470: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.023790007s STEP: Saw pod success Jun 10 21:59:17.470: INFO: Pod "pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83" satisfied condition "Succeeded or Failed" Jun 10 21:59:17.473: INFO: Trying to get logs from node node1 pod pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83 container agnhost-container: STEP: delete the pod Jun 10 21:59:17.489: INFO: Waiting for pod pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83 to disappear Jun 10 21:59:17.491: INFO: Pod pod-configmaps-42039101-a883-4ca5-b668-cf908859bb83 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:17.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7329" for this suite. • [SLOW TEST:12.089 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":167,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:17.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Jun 10 21:59:18.069: INFO: created pod pod-service-account-defaultsa Jun 10 21:59:18.069: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 10 21:59:18.078: INFO: created pod pod-service-account-mountsa Jun 10 21:59:18.078: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 10 21:59:18.087: INFO: created pod pod-service-account-nomountsa Jun 10 21:59:18.087: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 10 21:59:18.096: INFO: created pod pod-service-account-defaultsa-mountspec Jun 10 21:59:18.096: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 10 21:59:18.105: INFO: created pod pod-service-account-mountsa-mountspec Jun 10 21:59:18.105: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 10 21:59:18.115: INFO: created pod pod-service-account-nomountsa-mountspec Jun 10 21:59:18.115: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 10 21:59:18.124: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 10 21:59:18.124: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 10 21:59:18.132: INFO: created pod pod-service-account-mountsa-nomountspec Jun 10 21:59:18.132: INFO: pod pod-service-account-mountsa-nomountspec service account token volume 
mount: false Jun 10 21:59:18.141: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 10 21:59:18.141: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:18.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9726" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":11,"skipped":175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:18.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:18.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3237" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:19.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:58:19.452: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:20.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1699" for this suite. 
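Annotation: listing CustomResourceDefinition objects, as the test above does, is an ordinary cluster-scoped read against the apiextensions.k8s.io/v1 API, most conveniently issued through the dedicated apiextensions clientset rather than the core one. A sketch:

    package main

    import (
        "context"
        "fmt"

        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        c, err := apiextensionsclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // CRDs are cluster-scoped, so there is no namespace in the call.
        crds, err := c.ApiextensionsV1().CustomResourceDefinitions().List(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, crd := range crds.Items {
            fmt.Println(crd.Name, "->", crd.Spec.Group)
        }
    }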
• [SLOW TEST:61.349 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":6,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:20.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:59:21.293: INFO: Checking APIGroup: apiregistration.k8s.io Jun 10 21:59:21.294: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jun 10 21:59:21.294: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.294: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jun 10 21:59:21.294: INFO: Checking APIGroup: apps Jun 10 21:59:21.295: INFO: PreferredVersion.GroupVersion: apps/v1 Jun 10 21:59:21.295: INFO: Versions found [{apps/v1 v1}] Jun 10 21:59:21.295: INFO: apps/v1 matches apps/v1 Jun 10 21:59:21.295: INFO: Checking APIGroup: events.k8s.io Jun 10 21:59:21.295: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jun 10 21:59:21.295: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.295: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jun 10 21:59:21.295: INFO: Checking APIGroup: authentication.k8s.io Jun 10 21:59:21.296: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jun 10 21:59:21.296: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.296: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jun 10 21:59:21.296: INFO: Checking APIGroup: authorization.k8s.io Jun 10 21:59:21.297: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jun 10 21:59:21.297: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.297: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jun 10 21:59:21.297: INFO: Checking APIGroup: autoscaling Jun 10 21:59:21.298: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jun 10 21:59:21.298: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jun 10 21:59:21.298: INFO: autoscaling/v1 matches autoscaling/v1 Jun 
10 21:59:21.298: INFO: Checking APIGroup: batch Jun 10 21:59:21.299: INFO: PreferredVersion.GroupVersion: batch/v1 Jun 10 21:59:21.299: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jun 10 21:59:21.299: INFO: batch/v1 matches batch/v1 Jun 10 21:59:21.299: INFO: Checking APIGroup: certificates.k8s.io Jun 10 21:59:21.300: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jun 10 21:59:21.300: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.300: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jun 10 21:59:21.300: INFO: Checking APIGroup: networking.k8s.io Jun 10 21:59:21.301: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jun 10 21:59:21.301: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.301: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jun 10 21:59:21.301: INFO: Checking APIGroup: extensions Jun 10 21:59:21.302: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jun 10 21:59:21.302: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jun 10 21:59:21.302: INFO: extensions/v1beta1 matches extensions/v1beta1 Jun 10 21:59:21.302: INFO: Checking APIGroup: policy Jun 10 21:59:21.303: INFO: PreferredVersion.GroupVersion: policy/v1 Jun 10 21:59:21.303: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Jun 10 21:59:21.303: INFO: policy/v1 matches policy/v1 Jun 10 21:59:21.303: INFO: Checking APIGroup: rbac.authorization.k8s.io Jun 10 21:59:21.304: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jun 10 21:59:21.304: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.304: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jun 10 21:59:21.304: INFO: Checking APIGroup: storage.k8s.io Jun 10 21:59:21.305: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jun 10 21:59:21.305: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.305: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jun 10 21:59:21.305: INFO: Checking APIGroup: admissionregistration.k8s.io Jun 10 21:59:21.305: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jun 10 21:59:21.305: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.305: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jun 10 21:59:21.305: INFO: Checking APIGroup: apiextensions.k8s.io Jun 10 21:59:21.306: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jun 10 21:59:21.306: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.306: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jun 10 21:59:21.306: INFO: Checking APIGroup: scheduling.k8s.io Jun 10 21:59:21.307: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jun 10 21:59:21.307: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.307: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jun 10 21:59:21.307: INFO: Checking APIGroup: coordination.k8s.io Jun 10 21:59:21.308: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jun 10 21:59:21.308: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.308: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jun 10 
21:59:21.308: INFO: Checking APIGroup: node.k8s.io Jun 10 21:59:21.309: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jun 10 21:59:21.309: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.309: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jun 10 21:59:21.309: INFO: Checking APIGroup: discovery.k8s.io Jun 10 21:59:21.310: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Jun 10 21:59:21.310: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.310: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Jun 10 21:59:21.310: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jun 10 21:59:21.311: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jun 10 21:59:21.311: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.311: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jun 10 21:59:21.311: INFO: Checking APIGroup: intel.com Jun 10 21:59:21.311: INFO: PreferredVersion.GroupVersion: intel.com/v1 Jun 10 21:59:21.311: INFO: Versions found [{intel.com/v1 v1}] Jun 10 21:59:21.311: INFO: intel.com/v1 matches intel.com/v1 Jun 10 21:59:21.311: INFO: Checking APIGroup: k8s.cni.cncf.io Jun 10 21:59:21.312: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Jun 10 21:59:21.312: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Jun 10 21:59:21.312: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Jun 10 21:59:21.312: INFO: Checking APIGroup: monitoring.coreos.com Jun 10 21:59:21.313: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Jun 10 21:59:21.313: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Jun 10 21:59:21.313: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Jun 10 21:59:21.313: INFO: Checking APIGroup: telemetry.intel.com Jun 10 21:59:21.314: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Jun 10 21:59:21.314: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Jun 10 21:59:21.314: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Jun 10 21:59:21.314: INFO: Checking APIGroup: custom.metrics.k8s.io Jun 10 21:59:21.315: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Jun 10 21:59:21.315: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Jun 10 21:59:21.315: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:21.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-4498" for this suite. 
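Annotation: each "Checking APIGroup" line above corresponds to one entry in the /apis discovery document. A group advertises every version it serves plus the single version the server prefers, and the test asserts that the preferred version appears among the advertised ones. The same walk in client-go:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // ServerGroups performs the /apis discovery request; each group
        // carries its PreferredVersion and the full Versions list.
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            fmt.Printf("%-35s preferred=%-45s versions=%d\n",
                g.Name, g.PreferredVersion.GroupVersion, len(g.Versions))
        }
    }

The cluster-specific groups in the log (intel.com, k8s.cni.cncf.io, monitoring.coreos.com, telemetry.intel.com, custom.metrics.k8s.io) show that the check covers aggregated and CRD-provided groups as well as the built-in ones.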
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":7,"skipped":97,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":12,"skipped":244,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:18.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 21:59:18.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b" in namespace "downward-api-3918" to be "Succeeded or Failed" Jun 10 21:59:18.408: INFO: Pod "downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15327ms Jun 10 21:59:20.411: INFO: Pod "downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00513315s Jun 10 21:59:22.416: INFO: Pod "downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010022092s Jun 10 21:59:24.422: INFO: Pod "downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016188084s STEP: Saw pod success Jun 10 21:59:24.422: INFO: Pod "downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b" satisfied condition "Succeeded or Failed" Jun 10 21:59:24.424: INFO: Trying to get logs from node node1 pod downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b container client-container: STEP: delete the pod Jun 10 21:59:24.438: INFO: Waiting for pod downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b to disappear Jun 10 21:59:24.440: INFO: Pod downwardapi-volume-733878e8-cd85-4a70-83dc-d1e952481e5b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:24.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3918" for this suite. 
• [SLOW TEST:6.087 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:16.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1537 STEP: creating service affinity-clusterip-transition in namespace services-1537 STEP: creating replication controller affinity-clusterip-transition in namespace services-1537 I0610 21:58:16.979947 25 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1537, replica count: 3 I0610 21:58:20.031637 25 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:58:23.033066 25 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:58:26.033997 25 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:58:29.036636 25 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:58:32.037688 25 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 21:58:32.043: INFO: Creating new exec pod Jun 10 21:58:37.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1537 exec execpod-affinity7jjll -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Jun 10 21:58:37.325: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jun 10 21:58:37.325: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 21:58:37.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1537 exec execpod-affinity7jjll -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.11.53 80' Jun 10 21:58:37.622: INFO: stderr: "+ echo hostName\n+ nc 
-v -t -w 2 10.233.11.53 80\nConnection to 10.233.11.53 80 port [tcp/http] succeeded!\n" Jun 10 21:58:37.622: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 21:58:37.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1537 exec execpod-affinity7jjll -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.11.53:80/ ; done' Jun 10 21:58:38.016: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n" Jun 10 21:58:38.016: INFO: stdout: "\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-zf7tf" Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.016: INFO: Received 
response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.016: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1537 exec execpod-affinity7jjll -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.11.53:80/ ; done' Jun 10 21:58:38.401: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n" Jun 10 21:58:38.401: INFO: stdout: "\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-zf7tf\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-wflfv\naffinity-clusterip-transition-wflfv" Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-zf7tf Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:58:38.401: INFO: Received response from host: affinity-clusterip-transition-wflfv Jun 10 21:58:38.401: INFO: Received 
response from host: affinity-clusterip-transition-wflfv Jun 10 21:59:08.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1537 exec execpod-affinity7jjll -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.11.53:80/ ; done' Jun 10 21:59:08.681: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.11.53:80/\n" Jun 10 21:59:08.681: INFO: stdout: "\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm\naffinity-clusterip-transition-qbgdm" Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Received response from host: affinity-clusterip-transition-qbgdm Jun 10 21:59:08.681: INFO: Cleaning 
up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1537, will wait for the garbage collector to delete the pods Jun 10 21:59:08.755: INFO: Deleting ReplicationController affinity-clusterip-transition took: 13.969047ms Jun 10 21:59:08.855: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.306288ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:24.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1537" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:67.724 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:21.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:59:21.366: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Jun 10 21:59:21.380: INFO: The status of Pod pod-exec-websocket-357709cd-2662-4422-8ba1-69782ba85e50 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:23.384: INFO: The status of Pod pod-exec-websocket-357709cd-2662-4422-8ba1-69782ba85e50 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:25.384: INFO: The status of Pod pod-exec-websocket-357709cd-2662-4422-8ba1-69782ba85e50 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:25.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9649" for this suite. 
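The affinity test above drives 16 curl requests at the service's ClusterIP and requires a single backend hostname while Service.spec.sessionAffinity is ClientIP, then switches the field back to verify traffic spreads across backends again. A minimal sketch of the same toggle done by hand (namespace, service name, and the ClusterIP placeholder are hypothetical; assumes a configured kubectl):

  # Pin each client IP to a single backend pod:
  kubectl -n demo patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
  # Return to the default (non-sticky) behavior:
  kubectl -n demo patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'
  # Re-run the probe loop from the log against the service's ClusterIP:
  for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://<cluster-ip>:80/; echo; done

With affinity on, all 16 responses should name the same pod, exactly as the repeated affinity-clusterip-transition-qbgdm responses above show.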
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:10.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Jun 10 21:59:10.656: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. Jun 10 21:59:11.128: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 10 21:59:13.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 21:59:15.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 21:59:17.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 21:59:19.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 21:59:21.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 21:59:23.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495151, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jun 10 21:59:26.279: INFO: Waited 1.113455386s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Jun 10 21:59:26.733: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:27.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1519" for this suite. • [SLOW TEST:16.992 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:25.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 21:59:25.942: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 21:59:27.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495165, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495165, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495165, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495165, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 21:59:30.970: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing 
validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:31.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4607" for this suite. STEP: Destroying namespace "webhook-4607-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.541 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":9,"skipped":139,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:27.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Jun 10 21:59:27.663: INFO: Waiting up to 5m0s for pod "client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd" in namespace "containers-3487" to be "Succeeded or Failed" Jun 10 21:59:27.666: INFO: Pod "client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940098ms Jun 10 21:59:29.668: INFO: Pod "client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005593312s Jun 10 21:59:31.672: INFO: Pod "client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009541307s Jun 10 21:59:33.678: INFO: Pod "client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015144743s
STEP: Saw pod success
Jun 10 21:59:33.678: INFO: Pod "client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd" satisfied condition "Succeeded or Failed"
Jun 10 21:59:33.680: INFO: Trying to get logs from node node2 pod client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd container agnhost-container:
STEP: delete the pod
Jun 10 21:59:33.695: INFO: Waiting for pod client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd to disappear
Jun 10 21:59:33.697: INFO: Pod client-containers-c5a8fd31-d0cc-47da-a607-9780a94100dd no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:59:33.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3487" for this suite.

• [SLOW TEST:6.073 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":55,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:59:33.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 21:59:33.786: INFO: Got root ca configmap in namespace "svcaccounts-4708"
Jun 10 21:59:33.790: INFO: Deleted root ca configmap in namespace "svcaccounts-4708"
STEP: waiting for a new root ca configmap created
Jun 10 21:59:34.293: INFO: Recreated root ca configmap in namespace "svcaccounts-4708"
Jun 10 21:59:34.297: INFO: Updated root ca configmap in namespace "svcaccounts-4708"
STEP: waiting for the root ca configmap reconciled
Jun 10 21:59:34.801: INFO: Reconciled root ca configmap in namespace "svcaccounts-4708"
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 21:59:34.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4708" for this suite.
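The guarantee exercised above is that every namespace carries a kube-root-ca.crt ConfigMap, and that a controller (the root-CA publisher) recreates and reconciles it if it is deleted or edited. The same behavior can be observed by hand (namespace hypothetical):

  kubectl -n demo get configmap kube-root-ca.crt -o jsonpath='{.data.ca\.crt}' | head -n 2
  kubectl -n demo delete configmap kube-root-ca.crt
  sleep 2   # the publisher controller recreates the ConfigMap almost immediately
  kubectl -n demo get configmap kube-root-ca.crt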
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":10,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:31.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-6df05e6b-13d2-42c8-a280-1e123506e202 STEP: Creating a pod to test consume secrets Jun 10 21:59:31.160: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6" in namespace "projected-2242" to be "Succeeded or Failed" Jun 10 21:59:31.163: INFO: Pod "pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343718ms Jun 10 21:59:33.166: INFO: Pod "pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00532814s Jun 10 21:59:35.170: INFO: Pod "pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009290004s STEP: Saw pod success Jun 10 21:59:35.170: INFO: Pod "pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6" satisfied condition "Succeeded or Failed" Jun 10 21:59:35.172: INFO: Trying to get logs from node node1 pod pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6 container projected-secret-volume-test: STEP: delete the pod Jun 10 21:59:35.184: INFO: Waiting for pod pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6 to disappear Jun 10 21:59:35.186: INFO: Pod pod-projected-secrets-42a8ae15-3bbd-45ca-a5d0-72d3631eade6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:35.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2242" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":142,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":71,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:24.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:37.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9459" for this suite. • [SLOW TEST:13.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:34.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:38.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9073" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":11,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":5,"skipped":71,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:37.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 21:59:38.154: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 21:59:40.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495178, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495178, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495178, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495178, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 21:59:43.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:43.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1632" for this suite. STEP: Destroying namespace "webhook-1632-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":6,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:24.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 10 21:59:24.559: INFO: >>> kubeConfig: /root/.kube/config Jun 10 21:59:33.158: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:52.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2724" for this suite. 
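The check above registers two CRDs in different API groups and asserts that both schemas appear in the served OpenAPI document. With any CRD installed, the published schema can be confirmed through the same data kubectl consumes (group, version, and kind hypothetical):

  # kubectl explain is driven by the published OpenAPI schema:
  kubectl explain foos --api-version=example.com/v1
  # Or fetch the aggregated document directly and look for the CRD's definition:
  kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Foo'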
• [SLOW TEST:27.738 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":14,"skipped":285,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:57.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 10 21:58:57.949: INFO: PodSpec: initContainers in spec.initContainers Jun 10 21:59:52.407: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bb0e482a-6502-439d-a9f9-9ec9d51f416f", GenerateName:"", Namespace:"init-container-5089", SelfLink:"", UID:"a7cb18ae-a777-4d54-a05a-77f9a3ea4593", ResourceVersion:"35705", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63790495137, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"949004344"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.39\"\n ],\n \"mac\": \"26:7a:00:f5:6a:a8\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.39\"\n ],\n \"mac\": \"26:7a:00:f5:6a:a8\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f963a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f963c0)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f963d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f963f0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f96408), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f96420)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-qhf99", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004b57600), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qhf99", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qhf99", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qhf99", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004f8c888), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00343c1c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004f8c910)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004f8c930)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004f8c938), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004f8c93c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004f98110), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495137, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495137, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495137, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495137, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.4.39", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.4.39"}}, StartTime:(*v1.Time)(0xc004f96450), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00343c2a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00343c310)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://b662541f56bf3f94937fa4ef010cd44e294770e50e4be1bd4ccc05a0f667a748", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004b57680), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004b57660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004f8c9bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:52.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5089" for this suite. 
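The pod dump above shows the mechanism under test: init1 keeps terminating (RestartCount:3), so init2 stays Waiting and the app container run1 is never started, even though RestartPolicy is Always. A minimal reproduction using the same images (pod name hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["/bin/false"]    # always fails, so init2 and run1 never start
    - name: init2
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["/bin/true"]
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.4.1
  EOF
  kubectl get pod init-fail-demo   # stays Init:0/2 with a growing restart count on init1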
• [SLOW TEST:54.493 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":8,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:52.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 21:59:52.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277" in namespace "downward-api-3089" to be "Succeeded or Failed" Jun 10 21:59:52.536: INFO: Pod "downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.508473ms Jun 10 21:59:54.542: INFO: Pod "downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00828907s Jun 10 21:59:56.548: INFO: Pod "downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014608412s STEP: Saw pod success Jun 10 21:59:56.548: INFO: Pod "downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277" satisfied condition "Succeeded or Failed" Jun 10 21:59:56.551: INFO: Trying to get logs from node node2 pod downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277 container client-container: STEP: delete the pod Jun 10 21:59:56.568: INFO: Waiting for pod downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277 to disappear Jun 10 21:59:56.570: INFO: Pod downwardapi-volume-ba643425-8a3a-49a7-b99a-a549b7fd6277 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:56.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3089" for this suite. 
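The downward API case above relies on a documented fallback: a resourceFieldRef for limits.memory on a container with no memory limit reports the node's allocatable memory instead. A sketch of a pod that surfaces this value (names hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory    # no limit set, so node allocatable is published
  EOF
  kubectl logs downward-demo   # prints the node allocatable memory, in bytes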
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":227,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:52.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 10 21:59:52.326: INFO: Waiting up to 5m0s for pod "pod-6416ea3d-49e4-480f-9e03-76142f956744" in namespace "emptydir-5482" to be "Succeeded or Failed" Jun 10 21:59:52.330: INFO: Pod "pod-6416ea3d-49e4-480f-9e03-76142f956744": Phase="Pending", Reason="", readiness=false. Elapsed: 3.46727ms Jun 10 21:59:54.334: INFO: Pod "pod-6416ea3d-49e4-480f-9e03-76142f956744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007497623s Jun 10 21:59:56.337: INFO: Pod "pod-6416ea3d-49e4-480f-9e03-76142f956744": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010979418s Jun 10 21:59:58.341: INFO: Pod "pod-6416ea3d-49e4-480f-9e03-76142f956744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014910336s STEP: Saw pod success Jun 10 21:59:58.341: INFO: Pod "pod-6416ea3d-49e4-480f-9e03-76142f956744" satisfied condition "Succeeded or Failed" Jun 10 21:59:58.344: INFO: Trying to get logs from node node1 pod pod-6416ea3d-49e4-480f-9e03-76142f956744 container test-container: STEP: delete the pod Jun 10 21:59:58.357: INFO: Waiting for pod pod-6416ea3d-49e4-480f-9e03-76142f956744 to disappear Jun 10 21:59:58.359: INFO: Pod pod-6416ea3d-49e4-480f-9e03-76142f956744 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 21:59:58.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5482" for this suite. 
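The emptyDir case above ("root,0666,default") writes a file with mode 0666 into an emptyDir on the default medium and verifies the mode from inside the container. A loose hand reproduction (pod name hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}    # default medium (node disk); medium: Memory would use tmpfs
  EOF
  kubectl logs emptydir-demo   # expect "666"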
• [SLOW TEST:6.078 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":289,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 21:59:38.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 21:59:39.013: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:59:41.018: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 21:59:43.018: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:45.019: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:47.017: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:49.018: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:51.017: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:53.018: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:55.018: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:57.016: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 21:59:59.019: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = false)
Jun 10 22:00:01.018: INFO: The status of Pod test-webserver-563fb882-4007-40b2-a378-9c406e0ea064 is Running (Ready = true)
Jun 10 22:00:01.020: INFO: Container started at 2022-06-10 21:59:41 +0000 UTC, pod became ready at 2022-06-10 21:59:59 +0000 UTC
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:00:01.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3765" for this suite.
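The sequence above is the readinessProbe initialDelaySeconds contract: the container starts at 21:59:41 but the pod stays Ready = false until 21:59:59, and the container is never restarted. A sketch of a pod with the same shape (pod name and image illustrative, not the test's exact spec):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: test-webserver
      image: nginx:1.21
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 20   # pod must not report Ready before this elapses
        periodSeconds: 5
  EOF
  kubectl get pod readiness-demo -w   # READY flips 0/1 -> 1/1 only after the delay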
• [SLOW TEST:22.054 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":128,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:58.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 10 22:00:02.459: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:02.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9089" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:11.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:11.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3873" for this suite. 
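The container-runtime case above uses terminationMessagePolicy: FallbackToLogsOnError: when a container fails without writing /dev/termination-log, the kubelet fills the termination message from the tail of the container log, which is why "DONE" from stdout shows up as the message. A minimal reproduction (pod name hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo
  spec:
    restartPolicy: Never
    containers:
    - name: term
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["/bin/sh", "-c", "echo DONE; exit 1"]   # writes nothing to /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # Once the pod reaches Failed, the message is taken from the log tail:
  kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'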
• [SLOW TEST:60.050 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":331,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:00:11.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:00:11.674: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b2dceb21-dafa-49fa-b947-9e7739b98116" in namespace "security-context-test-8309" to be "Succeeded or Failed"
Jun 10 22:00:11.676: INFO: Pod "busybox-privileged-false-b2dceb21-dafa-49fa-b947-9e7739b98116": Phase="Pending", Reason="", readiness=false. Elapsed: 1.921511ms
Jun 10 22:00:13.679: INFO: Pod "busybox-privileged-false-b2dceb21-dafa-49fa-b947-9e7739b98116": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005714261s
Jun 10 22:00:15.685: INFO: Pod "busybox-privileged-false-b2dceb21-dafa-49fa-b947-9e7739b98116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011068289s
Jun 10 22:00:15.685: INFO: Pod "busybox-privileged-false-b2dceb21-dafa-49fa-b947-9e7739b98116" satisfied condition "Succeeded or Failed"
Jun 10 22:00:15.691: INFO: Got logs for pod "busybox-privileged-false-b2dceb21-dafa-49fa-b947-9e7739b98116": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:00:15.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8309" for this suite.
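The "RTNETLINK answers: Operation not permitted" log above is the expected outcome: with privileged: false, the container lacks the capabilities needed for network-device changes. A sketch of an equivalent pod (pod name hypothetical):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: unprivileged-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
      command: ["ip", "link", "add", "dummy0", "type", "dummy"]
      securityContext:
        privileged: false   # NET_ADMIN-requiring operations should be refused
  EOF
  kubectl logs unprivileged-demo   # expect "RTNETLINK answers: Operation not permitted"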
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":335,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:56.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 21:59:56.638: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 10 22:00:01.642: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 10 22:00:01.642: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 10 22:00:03.645: INFO: Creating deployment "test-rollover-deployment" Jun 10 22:00:03.651: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 10 22:00:05.658: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 10 22:00:05.664: INFO: Ensure that both replica sets have 1 created replica Jun 10 22:00:05.670: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 10 22:00:05.678: INFO: Updating deployment test-rollover-deployment Jun 10 22:00:05.678: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 10 22:00:07.684: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 10 22:00:07.691: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 10 22:00:07.696: INFO: all replica sets need to contain the pod-template-hash label Jun 10 22:00:07.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495205, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:00:09.706: INFO: all replica sets need to contain the pod-template-hash label Jun 10 22:00:09.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:00:11.702: INFO: all replica sets need to contain the pod-template-hash label Jun 10 22:00:11.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:00:13.704: INFO: all replica sets need to contain the pod-template-hash label Jun 10 22:00:13.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:00:15.703: INFO: all replica sets need to contain the pod-template-hash label Jun 10 22:00:15.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495209, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:00:17.703: INFO: all replica sets need to contain the pod-template-hash label Jun 10 22:00:17.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495203, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:00:19.704: INFO: Jun 10 22:00:19.704: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 10 22:00:19.712: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5921 0559c7cb-374f-4811-a2a9-2de61419f897 36125 2 2022-06-10 22:00:03 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-10 22:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ba32a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-10 22:00:03 +0000 UTC,LastTransitionTime:2022-06-10 22:00:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-06-10 22:00:19 +0000 UTC,LastTransitionTime:2022-06-10 22:00:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 10 22:00:19.716: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-5921 43cc226e-c13a-42bb-9686-d8aa7f21c75e 36115 2 2022-06-10 22:00:05 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0559c7cb-374f-4811-a2a9-2de61419f897 0xc002ba3820 0xc002ba3821}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0559c7cb-374f-4811-a2a9-2de61419f897\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ba3898 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:00:19.716: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 10 22:00:19.716: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5921 7333afba-2a43-468a-9242-a453f42bdbc9 36124 2 2022-06-10 21:59:56 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0559c7cb-374f-4811-a2a9-2de61419f897 0xc002ba3617 0xc002ba3618}] [] [{e2e.test Update apps/v1 2022-06-10 21:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:00:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0559c7cb-374f-4811-a2a9-2de61419f897\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ba36b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:00:19.716: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-5921 37916576-9679-4ad6-bf91-14f03cb17ac9 35959 2 2022-06-10 22:00:03 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 
Deployment test-rollover-deployment 0559c7cb-374f-4811-a2a9-2de61419f897 0xc002ba3727 0xc002ba3728}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0559c7cb-374f-4811-a2a9-2de61419f897\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ba37b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:00:19.720: INFO: Pod "test-rollover-deployment-98c5f4599-7p49z" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-7p49z test-rollover-deployment-98c5f4599- deployment-5921 541dd9af-553e-4f9e-b420-9882b46eca05 36010 0 2022-06-10 22:00:05 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.160" ], "mac": "ba:82:06:a7:71:d6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.160" ], "mac": "ba:82:06:a7:71:d6", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 43cc226e-c13a-42bb-9686-d8aa7f21c75e 0xc002d19f8f 0xc002d19fa0}] [] [{kube-controller-manager Update v1 2022-06-10 22:00:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43cc226e-c13a-42bb-9686-d8aa7f21c75e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:00:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.160\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dvpkn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvpkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.160,StartTime:2022-06-10 22:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:00:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://b79d3916f27196e4a5db3bbb47c560e5bf93f9d65fda610634dd641a669524e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:19.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5921" for this suite. 
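The rollover spec that just finished drives the deployment dumped above: maxSurge=1 with maxUnavailable=0 and minReadySeconds=10 forces the controller to keep one available pod while a second template change (the "rollover") arrives mid-rollout, and the test's exit condition is exactly what the final dump shows, both old ReplicaSets scaled to Replicas:*0. Below is a sketch of that deployment rebuilt from the values in the dump; names and image are copied from the log, the rest mirrors the spec the controller reported:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

// rolloverDeployment reconstructs the "test-rollover-deployment" spec from
// the dump above: one replica, rolling update that never drops below one
// available pod, and a 10s MinReadySeconds window that keeps intermediate
// ReplicaSets from becoming "available" before the next template update lands.
func rolloverDeployment() *appsv1.Deployment {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        int32Ptr(1),
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
}

func main() {
	manifest, _ := json.MarshalIndent(rolloverDeployment(), "", "  ")
	fmt.Println(string(manifest))
}
```

The rollover itself is just a template image change applied via Update, as the dump's old ReplicaSets show: the deployment first pointed at an unpullable intermediate image (gcr.io/google_samples/gb-redisslave:nonexistent) and was then updated to the agnhost image above, after which the controller scaled both superseded ReplicaSets to zero.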
• [SLOW TEST:23.120 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":10,"skipped":238,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:15.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-v5r9q in namespace proxy-4083 I0610 22:00:15.785304 33 runners.go:190] Created replication controller with name: proxy-service-v5r9q, namespace: proxy-4083, replica count: 1 I0610 22:00:16.836852 33 runners.go:190] proxy-service-v5r9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:00:17.837670 33 runners.go:190] proxy-service-v5r9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:00:18.838046 33 runners.go:190] proxy-service-v5r9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:00:19.839029 33 runners.go:190] proxy-service-v5r9q Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 22:00:19.841: INFO: setup took 4.067009857s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 10 22:00:19.844: INFO: (0) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.951794ms) Jun 10 22:00:19.844: INFO: (0) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.214464ms) Jun 10 22:00:19.845: INFO: (0) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 3.126572ms) Jun 10 22:00:19.845: INFO: (0) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.0348ms) Jun 10 22:00:19.845: INFO: (0) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.336004ms) Jun 10 22:00:19.845: INFO: (0) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... 
(200; 3.301319ms) Jun 10 22:00:19.847: INFO: (0) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 5.850898ms) Jun 10 22:00:19.847: INFO: (0) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 5.802108ms) Jun 10 22:00:19.847: INFO: (0) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 6.124338ms) Jun 10 22:00:19.847: INFO: (0) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 5.790544ms) Jun 10 22:00:19.847: INFO: (0) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 6.007346ms) Jun 10 22:00:19.849: INFO: (0) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 8.050077ms) Jun 10 22:00:19.850: INFO: (0) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: ... (200; 2.443328ms) Jun 10 22:00:19.852: INFO: (1) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.417772ms) Jun 10 22:00:19.853: INFO: (1) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.728797ms) Jun 10 22:00:19.853: INFO: (1) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.562459ms) Jun 10 22:00:19.853: INFO: (1) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 3.480084ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 3.534084ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.856571ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 3.783403ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.988557ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.915999ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.890268ms) Jun 10 22:00:19.854: INFO: (1) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 4.367485ms) Jun 10 22:00:19.855: INFO: (1) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 4.490095ms) Jun 10 22:00:19.857: INFO: (2) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.380022ms) Jun 10 22:00:19.857: INFO: (2) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.359153ms) Jun 10 22:00:19.857: INFO: (2) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.587689ms) Jun 10 22:00:19.857: INFO: (2) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.408216ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.612843ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 2.956752ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... 
(200; 3.257996ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.385461ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.491532ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.293245ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.347226ms) Jun 10 22:00:19.858: INFO: (2) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.806536ms) Jun 10 22:00:19.859: INFO: (2) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 4.058417ms) Jun 10 22:00:19.859: INFO: (2) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 4.128842ms) Jun 10 22:00:19.859: INFO: (2) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 4.268314ms) Jun 10 22:00:19.861: INFO: (3) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.053952ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.51188ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.805956ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.976913ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.897799ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 2.873097ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.854379ms) Jun 10 22:00:19.862: INFO: (3) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.045514ms) Jun 10 22:00:19.863: INFO: (3) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.099236ms) Jun 10 22:00:19.863: INFO: (3) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.186263ms) Jun 10 22:00:19.863: INFO: (3) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 2.318301ms) Jun 10 22:00:19.866: INFO: (4) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.24449ms) Jun 10 22:00:19.866: INFO: (4) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... 
(200; 2.482623ms) Jun 10 22:00:19.866: INFO: (4) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.616921ms) Jun 10 22:00:19.866: INFO: (4) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test (200; 3.061571ms) Jun 10 22:00:19.867: INFO: (4) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.033671ms) Jun 10 22:00:19.867: INFO: (4) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.248164ms) Jun 10 22:00:19.867: INFO: (4) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.142082ms) Jun 10 22:00:19.867: INFO: (4) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.459866ms) Jun 10 22:00:19.868: INFO: (4) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.717701ms) Jun 10 22:00:19.868: INFO: (4) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.673223ms) Jun 10 22:00:19.868: INFO: (4) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.894657ms) Jun 10 22:00:19.868: INFO: (4) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 4.044721ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 2.35377ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.364255ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.307059ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.626065ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.64946ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 2.975877ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.757254ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 3.138192ms) Jun 10 22:00:19.871: INFO: (5) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.279758ms) Jun 10 22:00:19.872: INFO: (5) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.720234ms) Jun 10 22:00:19.872: INFO: (5) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.508531ms) Jun 10 22:00:19.872: INFO: (5) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.533079ms) Jun 10 22:00:19.872: INFO: (5) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.70636ms) Jun 10 22:00:19.872: INFO: (5) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: ... 
(200; 1.965564ms) Jun 10 22:00:19.875: INFO: (6) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.459633ms) Jun 10 22:00:19.875: INFO: (6) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.552629ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.809989ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.998206ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.943414ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 2.917507ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.044145ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.471073ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.402217ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.532814ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.423675ms) Jun 10 22:00:19.876: INFO: (6) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.638726ms) Jun 10 22:00:19.877: INFO: (6) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.954635ms) Jun 10 22:00:19.877: INFO: (6) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 4.00069ms) Jun 10 22:00:19.879: INFO: (7) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.323302ms) Jun 10 22:00:19.879: INFO: (7) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.289486ms) Jun 10 22:00:19.879: INFO: (7) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.429789ms) Jun 10 22:00:19.879: INFO: (7) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: ... (200; 3.085857ms) Jun 10 22:00:19.880: INFO: (7) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.243299ms) Jun 10 22:00:19.880: INFO: (7) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 3.326694ms) Jun 10 22:00:19.880: INFO: (7) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.382897ms) Jun 10 22:00:19.881: INFO: (7) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.735561ms) Jun 10 22:00:19.881: INFO: (7) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.91301ms) Jun 10 22:00:19.881: INFO: (7) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.946601ms) Jun 10 22:00:19.883: INFO: (8) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... 
(200; 2.109804ms) Jun 10 22:00:19.883: INFO: (8) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.276467ms) Jun 10 22:00:19.883: INFO: (8) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.318134ms) Jun 10 22:00:19.884: INFO: (8) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.376322ms) Jun 10 22:00:19.884: INFO: (8) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.68924ms) Jun 10 22:00:19.884: INFO: (8) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.140975ms) Jun 10 22:00:19.884: INFO: (8) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 3.174282ms) Jun 10 22:00:19.884: INFO: (8) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 1.970606ms) Jun 10 22:00:19.888: INFO: (9) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.061554ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.704791ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.711346ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.786003ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.674067ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.027566ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.804497ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.849867ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.26624ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.53393ms) Jun 10 22:00:19.889: INFO: (9) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 2.561446ms) Jun 10 22:00:19.892: INFO: (10) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.531125ms) Jun 10 22:00:19.893: INFO: (10) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: ... 
(200; 3.218705ms) Jun 10 22:00:19.893: INFO: (10) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.25432ms) Jun 10 22:00:19.893: INFO: (10) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.611245ms) Jun 10 22:00:19.894: INFO: (10) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.79689ms) Jun 10 22:00:19.894: INFO: (10) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.947447ms) Jun 10 22:00:19.894: INFO: (10) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.906328ms) Jun 10 22:00:19.896: INFO: (11) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.328532ms) Jun 10 22:00:19.896: INFO: (11) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.329245ms) Jun 10 22:00:19.896: INFO: (11) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.412647ms) Jun 10 22:00:19.896: INFO: (11) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.485999ms) Jun 10 22:00:19.897: INFO: (11) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.494108ms) Jun 10 22:00:19.897: INFO: (11) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.76138ms) Jun 10 22:00:19.897: INFO: (11) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.875563ms) Jun 10 22:00:19.898: INFO: (11) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.703677ms) Jun 10 22:00:19.898: INFO: (11) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.859115ms) Jun 10 22:00:19.898: INFO: (11) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 4.158971ms) Jun 10 22:00:19.899: INFO: (11) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 5.242531ms) Jun 10 22:00:19.900: INFO: (11) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 6.141145ms) Jun 10 22:00:19.901: INFO: (11) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 6.854057ms) Jun 10 22:00:19.902: INFO: (11) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 7.629678ms) Jun 10 22:00:19.904: INFO: (12) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.027633ms) Jun 10 22:00:19.904: INFO: (12) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.2525ms) Jun 10 22:00:19.904: INFO: (12) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.262892ms) Jun 10 22:00:19.904: INFO: (12) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.440932ms) Jun 10 22:00:19.904: INFO: (12) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.557303ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... 
(200; 2.541658ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.957228ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.995173ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.06364ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.342669ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.218006ms) Jun 10 22:00:19.905: INFO: (12) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test (200; 2.346773ms) Jun 10 22:00:19.908: INFO: (13) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.413865ms) Jun 10 22:00:19.908: INFO: (13) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.446245ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 2.541739ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.515739ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.564027ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.936473ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.089584ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 3.071822ms) Jun 10 22:00:19.909: INFO: (13) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.205023ms) Jun 10 22:00:19.910: INFO: (13) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.50441ms) Jun 10 22:00:19.910: INFO: (13) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test (200; 2.040751ms) Jun 10 22:00:19.913: INFO: (14) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... 
(200; 2.648673ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.978469ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.922132ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.938819ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.922407ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.086469ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.015573ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.274291ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.683232ms) Jun 10 22:00:19.914: INFO: (14) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 3.503464ms) Jun 10 22:00:19.915: INFO: (14) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.848555ms) Jun 10 22:00:19.915: INFO: (14) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.814733ms) Jun 10 22:00:19.915: INFO: (14) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.939424ms) Jun 10 22:00:19.917: INFO: (15) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.401997ms) Jun 10 22:00:19.917: INFO: (15) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.551388ms) Jun 10 22:00:19.918: INFO: (15) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.737728ms) Jun 10 22:00:19.918: INFO: (15) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.707724ms) Jun 10 22:00:19.918: INFO: (15) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 2.875792ms) Jun 10 22:00:19.918: INFO: (15) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 3.136735ms) Jun 10 22:00:19.919: INFO: (15) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.599843ms) Jun 10 22:00:19.919: INFO: (15) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 3.538564ms) Jun 10 22:00:19.919: INFO: (15) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.770567ms) Jun 10 22:00:19.919: INFO: (15) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 4.09464ms) Jun 10 22:00:19.919: INFO: (15) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 4.047799ms) Jun 10 22:00:19.919: INFO: (15) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 4.21115ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.323082ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... 
(200; 2.324046ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.43668ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.557651ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.591339ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test<... (200; 2.809673ms) Jun 10 22:00:19.922: INFO: (16) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.850159ms) Jun 10 22:00:19.923: INFO: (16) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.997607ms) Jun 10 22:00:19.923: INFO: (16) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.258771ms) Jun 10 22:00:19.923: INFO: (16) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.420285ms) Jun 10 22:00:19.923: INFO: (16) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.582937ms) Jun 10 22:00:19.923: INFO: (16) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.63571ms) Jun 10 22:00:19.923: INFO: (16) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.633265ms) Jun 10 22:00:19.924: INFO: (16) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 4.044986ms) Jun 10 22:00:19.926: INFO: (17) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.32132ms) Jun 10 22:00:19.926: INFO: (17) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 2.271699ms) Jun 10 22:00:19.926: INFO: (17) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.421852ms) Jun 10 22:00:19.926: INFO: (17) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.352651ms) Jun 10 22:00:19.926: INFO: (17) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.345203ms) Jun 10 22:00:19.927: INFO: (17) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... (200; 2.724986ms) Jun 10 22:00:19.927: INFO: (17) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.864107ms) Jun 10 22:00:19.927: INFO: (17) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 2.981925ms) Jun 10 22:00:19.927: INFO: (17) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.997617ms) Jun 10 22:00:19.927: INFO: (17) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: test (200; 2.753699ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.753413ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 2.93839ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.036047ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... 
(200; 3.058832ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 3.025239ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 3.099172ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 3.246211ms) Jun 10 22:00:19.931: INFO: (18) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname1/proxy/: foo (200; 3.460529ms) Jun 10 22:00:19.932: INFO: (18) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.502621ms) Jun 10 22:00:19.932: INFO: (18) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.892296ms) Jun 10 22:00:19.932: INFO: (18) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 4.288277ms) Jun 10 22:00:19.932: INFO: (18) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 4.112628ms) Jun 10 22:00:19.932: INFO: (18) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 4.162962ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.163634ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:460/proxy/: tls baz (200; 2.44399ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z/proxy/: test (200; 2.515387ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:462/proxy/: tls qux (200; 2.516191ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:1080/proxy/: test<... (200; 2.698831ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.779015ms) Jun 10 22:00:19.935: INFO: (19) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:162/proxy/: bar (200; 2.891965ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:160/proxy/: foo (200; 2.921771ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname1/proxy/: foo (200; 3.215294ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/pods/http:proxy-service-v5r9q-p768z:1080/proxy/: ... 
(200; 3.355997ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/services/http:proxy-service-v5r9q:portname2/proxy/: bar (200; 3.512546ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/services/proxy-service-v5r9q:portname2/proxy/: bar (200; 3.773803ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname2/proxy/: tls qux (200; 3.713048ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/services/https:proxy-service-v5r9q:tlsportname1/proxy/: tls baz (200; 3.886286ms) Jun 10 22:00:19.936: INFO: (19) /api/v1/namespaces/proxy-4083/pods/https:proxy-service-v5r9q-p768z:443/proxy/: [response body and the remainder of the proxy test output truncated in source] ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Jun 10 22:00:27.160: INFO: created test-podtemplate-1 Jun 10 22:00:27.163: INFO: created test-podtemplate-2 Jun 10 22:00:27.166: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jun 10 22:00:27.171: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jun 10 22:00:27.180: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:27.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4620" for this suite.
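
The DeleteCollection request logged above removes every PodTemplate matching a label selector in a single API call. A minimal client-go sketch of the same flow, assuming a reachable cluster via ~/.kube/config; the namespace and label value are illustrative, not values from this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        // One call deletes every PodTemplate carrying the label...
        sel := metav1.ListOptions{LabelSelector: "podtemplate-set=true"} // hypothetical label
        if err := cs.CoreV1().PodTemplates("default").DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
            panic(err)
        }
        // ...and a list with the same selector confirms the quantity, as the test does.
        left, err := cs.CoreV1().PodTemplates("default").List(ctx, sel)
        if err != nil {
            panic(err)
        }
        fmt.Println("remaining pod templates:", len(left.Items)) // expect 0
    }
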
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":16,"skipped":358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:27.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 10 22:00:27.288: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6971 5c95e5b5-ed5e-4007-8abf-475ee4908793 36255 0 2022-06-10 22:00:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-10 22:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:00:27.288: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6971 5c95e5b5-ed5e-4007-8abf-475ee4908793 36256 0 2022-06-10 22:00:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-10 22:00:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 10 22:00:27.299: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6971 5c95e5b5-ed5e-4007-8abf-475ee4908793 36257 0 2022-06-10 22:00:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-10 22:00:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:00:27.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6971 5c95e5b5-ed5e-4007-8abf-475ee4908793 36258 0 2022-06-10 22:00:27 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-10 22:00:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:27.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6971" for this suite. 
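
The restart step above works because a watch can be opened at an explicit resourceVersion: a client that closed or lost its watch replays every event it missed before receiving new ones. A sketch of the resume half in client-go; the configmap name and label reuse the log's values, while the namespace and kubeconfig handling are assumptions:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        // Read the configmap to learn the resourceVersion the first watch stopped at.
        cm, err := cs.CoreV1().ConfigMaps("default").Get(ctx, "e2e-watch-test-watch-closed", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Restart the watch from that version: the MODIFIED and DELETED events that
        // happened while no watch was open are delivered before anything new.
        w, err := cs.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{
            ResourceVersion: cm.ResourceVersion,
            LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
        })
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Println("Got :", ev.Type)
        }
    }
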
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":17,"skipped":388,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:27.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Jun 10 22:00:27.368: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3312 proxy --unix-socket=/tmp/kubectl-proxy-unix825690940/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:27.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3312" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":18,"skipped":404,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:27.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:00:27.525: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 10 22:00:32.530: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 10 22:00:32.530: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 10 22:00:32.544: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1121 775f8aec-1dcb-493d-a5f5-c17e88a8ba5a 36343 1 2022-06-10 22:00:32 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-06-10 22:00:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004662658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 10 22:00:32.547: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-1121 bb79c858-c02f-44f6-a056-2f82a2c18057 36345 1 2022-06-10 22:00:32 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 775f8aec-1dcb-493d-a5f5-c17e88a8ba5a 0xc004662a87 0xc004662a88}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:00:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"775f8aec-1dcb-493d-a5f5-c17e88a8ba5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004662b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:00:32.547: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 10 22:00:32.547: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1121 f745d5c3-c7df-4d9c-87c7-75354c7e09c0 36344 1 2022-06-10 22:00:27 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 775f8aec-1dcb-493d-a5f5-c17e88a8ba5a 0xc004662977 0xc004662978}] [] [{e2e.test Update apps/v1 2022-06-10 22:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"775f8aec-1dcb-493d-a5f5-c17e88a8ba5a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004662a18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:00:32.550: INFO: Pod "test-cleanup-controller-tf6l8" is available: &Pod{ObjectMeta:{test-cleanup-controller-tf6l8 test-cleanup-controller- deployment-1121 7bb70643-9d5e-4b97-a0a6-f06575d2f631 36292 0 2022-06-10 22:00:27 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.64" ], "mac": "06:7c:e7:bc:93:86", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.64" ], "mac": "06:7c:e7:bc:93:86", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller f745d5c3-c7df-4d9c-87c7-75354c7e09c0 0xc004662f37 0xc004662f38}] [] [{kube-controller-manager Update v1 2022-06-10 22:00:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f745d5c3-c7df-4d9c-87c7-75354c7e09c0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:00:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mzj85,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mzj85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:
*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:00:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.64,StartTime:2022-06-10 22:00:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:00:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://7501422260bc0bdea704883302da91459e8e6030c81cd9371392d4452c774db9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:32.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1121" for this suite. 
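
The cleanup being verified here is driven by spec.revisionHistoryLimit, visible as RevisionHistoryLimit:*0 in the dump above: with a limit of 0, the deployment controller deletes a superseded ReplicaSet as soon as it is scaled down. A sketch of creating such a deployment; only the limit is the point, and the pod template is a minimal placeholder rather than the test's exact spec:

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        replicas, history := int32(1), int32(0)
        labels := map[string]string{"name": "cleanup-pod"}
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                // 0 keeps no superseded ReplicaSets around at all.
                RevisionHistoryLimit: &history,
                Selector:             &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "agnhost",
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    }}},
                },
            },
        }
        if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

With a nonzero limit, old ReplicaSets are instead kept scaled to 0 so the rollout can be undone.
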
• [SLOW TEST:5.060 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":19,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:35.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-00703ff1-1929-4306-9d99-ba5f16787b5e STEP: Creating the pod Jun 10 21:59:35.255: INFO: The status of Pod pod-projected-configmaps-490f3056-8774-422c-a7b8-2c495b927271 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:37.259: INFO: The status of Pod pod-projected-configmaps-490f3056-8774-422c-a7b8-2c495b927271 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:39.263: INFO: The status of Pod pod-projected-configmaps-490f3056-8774-422c-a7b8-2c495b927271 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-00703ff1-1929-4306-9d99-ba5f16787b5e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:41.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8109" for this suite. 
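
The long pause in the middle of this test is the kubelet's sync loop: a projected ConfigMap volume is refreshed asynchronously after the API object changes, so an update becomes visible in the mounted file eventually rather than immediately. A sketch of the update half; the configmap name, key, and value are hypothetical:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        cms := cs.CoreV1().ConfigMaps("default")
        cm, err := cms.Get(ctx, "projected-configmap-test-upd", metav1.GetOptions{}) // placeholder name
        if err != nil {
            panic(err)
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["data-1"] = "value-2" // hypothetical key/value
        // After the Update, the kubelet rewrites the projected volume on a later
        // sync; a pod tailing the mounted file observes the new content eventually.
        if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
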
• [SLOW TEST:66.479 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:14.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-5fe95e94-0f71-443b-a52f-66ae6d9de744 STEP: Creating secret with name s-test-opt-upd-ab32da21-c2ec-40c1-bbb3-4680da3c067a STEP: Creating the pod Jun 10 21:59:14.767: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:16.771: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:18.773: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:20.772: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:22.771: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:24.772: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:26.771: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:28.773: INFO: The status of Pod pod-secrets-c889ea2c-1696-40c8-becb-37f8920048f5 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-5fe95e94-0f71-443b-a52f-66ae6d9de744 STEP: Updating secret s-test-opt-upd-ab32da21-c2ec-40c1-bbb3-4680da3c067a STEP: Creating secret with name s-test-opt-create-f881d671-4d10-4dca-9698-8433ff3e6d90 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:43.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4896" for this suite. 
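
The "optional" behavior exercised above hinges on SecretVolumeSource.Optional: a pod referencing a secret that does not exist still starts, the volume contents appear once the secret is created, and they are removed again when it is deleted. A sketch of such a volume definition; the names are placeholders:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := true
        vol := corev1.Volume{
            Name: "secret-volumes",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "s-test-opt-create", // placeholder name
                    Optional:   &optional,           // pod starts even if the secret is absent
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
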
• [SLOW TEST:88.572 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":298,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:43.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics Jun 10 22:00:44.392: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 10 22:00:44.458: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:44.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9282" for this suite. 
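
The check above is about deleteOptions.propagationPolicy: with Orphan, the Deployment object is deleted, but instead of cascading the delete the garbage collector only strips the ownerReference from the ReplicaSet it owned. A minimal sketch; the deployment name is hypothetical:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Orphan: the Deployment goes away; its ReplicaSet and pods stay behind
        // with the ownerReference removed by the garbage collector.
        orphan := metav1.DeletePropagationOrphan
        err = cs.AppsV1().Deployments("default").Delete(context.TODO(),
            "example-deployment", // hypothetical name
            metav1.DeleteOptions{PropagationPolicy: &orphan})
        if err != nil {
            panic(err)
        }
    }

From the command line, `kubectl delete deployment example-deployment --cascade=orphan` requests the same policy.
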
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":14,"skipped":305,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:44.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:00:44.492: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Jun 10 22:00:44.509: INFO: The status of Pod pod-logs-websocket-75b6b44a-62c7-4c40-ae1c-b57957317d18 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:00:46.515: INFO: The status of Pod pod-logs-websocket-75b6b44a-62c7-4c40-ae1c-b57957317d18 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:00:48.512: INFO: The status of Pod pod-logs-websocket-75b6b44a-62c7-4c40-ae1c-b57957317d18 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:00:50.512: INFO: The status of Pod pod-logs-websocket-75b6b44a-62c7-4c40-ae1c-b57957317d18 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:50.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4717" for this suite. 
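
The test reads the pod's /log subresource over a websocket-upgraded connection; an ordinary client gets the same byte stream over plain HTTP, which is what client-go's GetLogs does. A hedged sketch of that HTTP variant (the websocket transport itself is internal to the e2e framework); the pod name is a placeholder:

    package main

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Stream the main container's log: the same data the websocket test reads.
        req := cs.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
        rc, err := req.Stream(context.TODO())
        if err != nil {
            panic(err)
        }
        defer rc.Close()
        io.Copy(os.Stdout, rc)
    }
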
• [SLOW TEST:6.067 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":306,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:32.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-mqxp STEP: Creating a pod to test atomic-volume-subpath Jun 10 22:00:32.632: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mqxp" in namespace "subpath-8465" to be "Succeeded or Failed" Jun 10 22:00:32.635: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172488ms Jun 10 22:00:34.639: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006536613s Jun 10 22:00:36.645: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.0121749s Jun 10 22:00:38.649: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 6.0164083s Jun 10 22:00:40.653: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 8.020280459s Jun 10 22:00:42.657: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 10.024142116s Jun 10 22:00:44.660: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 12.027953712s Jun 10 22:00:46.664: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 14.031294543s Jun 10 22:00:48.668: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 16.035217923s Jun 10 22:00:50.671: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 18.038386145s Jun 10 22:00:52.675: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 20.042345575s Jun 10 22:00:54.679: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Running", Reason="", readiness=true. Elapsed: 22.046246206s Jun 10 22:00:56.682: INFO: Pod "pod-subpath-test-downwardapi-mqxp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.049294337s STEP: Saw pod success Jun 10 22:00:56.682: INFO: Pod "pod-subpath-test-downwardapi-mqxp" satisfied condition "Succeeded or Failed" Jun 10 22:00:56.685: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-mqxp container test-container-subpath-downwardapi-mqxp: STEP: delete the pod Jun 10 22:00:56.701: INFO: Waiting for pod pod-subpath-test-downwardapi-mqxp to disappear Jun 10 22:00:56.702: INFO: Pod pod-subpath-test-downwardapi-mqxp no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mqxp Jun 10 22:00:56.703: INFO: Deleting pod "pod-subpath-test-downwardapi-mqxp" in namespace "subpath-8465" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:00:56.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8465" for this suite. • [SLOW TEST:24.116 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":434,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:50.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Jun 10 22:00:50.585: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jun 10 22:00:50.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 create -f -' Jun 10 22:00:51.001: INFO: stderr: "" Jun 10 22:00:51.001: INFO: stdout: "service/agnhost-replica created\n" Jun 10 22:00:51.002: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jun 10 22:00:51.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 create -f -' Jun 10 22:00:51.301: INFO: stderr: "" Jun 10 22:00:51.302: INFO: stdout: "service/agnhost-primary created\n" Jun 10 22:00:51.302: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an 
external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 10 22:00:51.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 create -f -' Jun 10 22:00:51.653: INFO: stderr: "" Jun 10 22:00:51.653: INFO: stdout: "service/frontend created\n" Jun 10 22:00:51.653: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jun 10 22:00:51.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 create -f -' Jun 10 22:00:51.986: INFO: stderr: "" Jun 10 22:00:51.986: INFO: stdout: "deployment.apps/frontend created\n" Jun 10 22:00:51.986: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 10 22:00:51.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 create -f -' Jun 10 22:00:52.333: INFO: stderr: "" Jun 10 22:00:52.333: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jun 10 22:00:52.334: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 10 22:00:52.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 create -f -' Jun 10 22:00:52.687: INFO: stderr: "" Jun 10 22:00:52.687: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jun 10 22:00:52.687: INFO: Waiting for all frontend pods to be Running. Jun 10 22:01:02.739: INFO: Waiting for frontend to serve content. Jun 10 22:01:02.747: INFO: Trying to add a new entry to the guestbook. Jun 10 22:01:02.753: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 10 22:01:02.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 delete --grace-period=0 --force -f -' Jun 10 22:01:02.897: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:01:02.897: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jun 10 22:01:02.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 delete --grace-period=0 --force -f -' Jun 10 22:01:03.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:01:03.028: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jun 10 22:01:03.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 delete --grace-period=0 --force -f -' Jun 10 22:01:03.149: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:01:03.149: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 10 22:01:03.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 delete --grace-period=0 --force -f -' Jun 10 22:01:03.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:01:03.291: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 10 22:01:03.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 delete --grace-period=0 --force -f -' Jun 10 22:01:03.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:01:03.427: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jun 10 22:01:03.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6397 delete --grace-period=0 --force -f -' Jun 10 22:01:03.558: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:01:03.558: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:01:03.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6397" for this suite. 
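
Every force deletion above prints the "immediate deletion" warning because --grace-period=0 --force removes the object from the API without waiting for graceful termination. The client-go equivalent for a pod, where the distinction actually matters; the pod name is hypothetical:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GracePeriodSeconds of 0 mirrors `kubectl delete --grace-period=0 --force`:
        // the API forgets the pod immediately, even if the container lingers on the node.
        grace := int64(0)
        err = cs.CoreV1().Pods("default").Delete(context.TODO(),
            "frontend-12345", // hypothetical pod name
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
        if err != nil {
            panic(err)
        }
    }
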
• [SLOW TEST:13.006 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":16,"skipped":317,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:01:03.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-30b16cb7-3228-4194-a62b-a88f688d3f82 STEP: Creating a pod to test consume secrets Jun 10 22:01:03.630: INFO: Waiting up to 5m0s for pod "pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030" in namespace "secrets-4661" to be "Succeeded or Failed" Jun 10 22:01:03.632: INFO: Pod "pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007548ms Jun 10 22:01:05.635: INFO: Pod "pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005296352s Jun 10 22:01:07.640: INFO: Pod "pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010246302s Jun 10 22:01:09.644: INFO: Pod "pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013686148s STEP: Saw pod success Jun 10 22:01:09.644: INFO: Pod "pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030" satisfied condition "Succeeded or Failed" Jun 10 22:01:09.647: INFO: Trying to get logs from node node1 pod pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030 container secret-volume-test: STEP: delete the pod Jun 10 22:01:09.661: INFO: Waiting for pod pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030 to disappear Jun 10 22:01:09.663: INFO: Pod pod-secrets-41a29e89-6c11-4baf-864f-e92d8e5d2030 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:01:09.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4661" for this suite. 
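
The defaultMode under test is the file mode applied to every key projected from the secret into the volume. A sketch of the volume definition; 0400 (owner read-only) is an assumed value for illustration, not one read from this run's output:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // 0400: files created from the secret are readable by the owner only.
        mode := int32(0400)
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName:  "secret-test", // placeholder name
                    DefaultMode: &mode,
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
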
• [SLOW TEST:6.076 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":330,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:00:56.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Jun 10 22:00:56.774: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.774: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.778: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.778: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.785: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.785: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.799: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:00:56.799: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jun 10 22:01:02.216: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jun 10 22:01:02.216: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jun 10 22:01:02.222: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Jun 10 22:01:02.229: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.230: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 0
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.231: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.233: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.233: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.239: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.239: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:02.245: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:02.245: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:02.255: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:02.255: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:07.261: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:07.261: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:07.280: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
STEP: listing Deployments
Jun 10 22:01:07.283: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Jun 10 22:01:07.294: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Jun 10 22:01:07.301: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:07.301: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:07.305: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:07.312: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:07.316: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:10.176: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:10.188: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:10.193: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jun 10 22:01:12.410: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Jun 10 22:01:12.435: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:12.435: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:12.435: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:12.435: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:12.435: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 1
Jun 10 22:01:12.435: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:12.436: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:12.436: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 2
Jun 10 22:01:12.436: INFO: observed Deployment test-deployment in namespace deployment-1853 with ReadyReplicas 3
STEP: deleting the Deployment
Jun 10 22:01:12.442: INFO: observed event type MODIFIED
Jun 10 22:01:12.442: INFO: observed event type MODIFIED
Jun 10 22:01:12.442: INFO: observed event type MODIFIED
Jun 10 22:01:12.442: INFO: observed event type MODIFIED
Jun 10 22:01:12.443: INFO: observed event type MODIFIED
Jun 10 22:01:12.443: INFO: observed event type MODIFIED
Jun 10 22:01:12.443: INFO: observed event type MODIFIED
Jun 10 22:01:12.443: INFO: observed event type MODIFIED
Jun 10 22:01:12.443: INFO: observed event type MODIFIED
Jun 10 22:01:12.443: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Jun 10 22:01:12.445: INFO: Log out all the ReplicaSets if there is no deployment created
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:01:12.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1853" for this suite.
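Note: the lifecycle test above drives every Deployment verb through the API in sequence: create, watch ReadyReplicas converge, patch labels, list across namespaces, update the pod template, patch the status subresource, and delete. A rough kubectl equivalent of the main steps, for orientation only (the namespace and the patched label come from the log; the image and initial replica count are assumptions, inferred from the watch reaching ReadyReplicas 2):

kubectl create deployment test-deployment -n deployment-1853 \
    --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --replicas=2   # image is an assumption
kubectl patch deployment test-deployment -n deployment-1853 \
    -p '{"metadata":{"labels":{"test-deployment":"patched"}}}'       # yields the labels the listing step finds
kubectl get deployments -A -l test-deployment-static=true            # "listing Deployments"
kubectl delete deployment test-deployment -n deployment-1853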
• [SLOW TEST:15.713 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":21,"skipped":450,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:01:09.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 10 22:01:10.111: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 10 22:01:12.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495270, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495270, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495270, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495270, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 10 22:01:15.134: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:01:15.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3004-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:01:23.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3345" for this suite.
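Note: behind the STEP lines above, the suite deploys a TLS-serving webhook pod, exposes it via the e2e-test-webhook service, waits for the endpoint to pair, and only then registers the mutating hook, so admission never races the backend. A minimal sketch of such a registration, with assumed values for the webhook name, path, and CA bundle (the service, namespace, API group, and resource plural are taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook          # assumed name
webhooks:
- name: mutate-crd.webhook.example.com     # assumed name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-3345
      path: /mutating-custom-resource      # assumed path
    caBundle: LS0tLS1CRUdJTi4uLg==         # placeholder: base64 CA that signed the server cert
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1", "v2"]
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-3004-crds"]
EOF

Because the rule matches both v1 and v2, mutation keeps working after the CRD's storage version is flipped from v1 to v2, which is exactly what the test patches and then verifies.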
STEP: Destroying namespace "webhook-3345-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.608 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":18,"skipped":339,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:01:12.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2904 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2904;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2904 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2904;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2904.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2904.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2904.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2904.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2904.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2904.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2904.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2904.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2904.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 77.50.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.50.77_udp@PTR;check="$$(dig +tcp +noall +answer +search 77.50.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.50.77_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2904 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2904;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2904 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2904;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2904.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2904.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2904.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2904.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2904.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2904.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2904.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2904.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2904.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2904.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 77.50.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.50.77_udp@PTR;check="$$(dig +tcp +noall +answer +search 77.50.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.50.77_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 10 22:01:18.541: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.543: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-2904 from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2904 from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.551: INFO: Unable to read wheezy_udp@dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.553: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.555: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.558: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.576: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.578: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.581: INFO: Unable to read jessie_udp@dns-test-service.dns-2904 from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-2904 from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.586: INFO: Unable to read jessie_udp@dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.588: INFO: Unable to read jessie_tcp@dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.591: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.593: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2904.svc from pod dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db: the server could not find the requested resource (get pods dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db)
Jun 10 22:01:18.610: INFO: Lookups using dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2904 wheezy_tcp@dns-test-service.dns-2904 wheezy_udp@dns-test-service.dns-2904.svc wheezy_tcp@dns-test-service.dns-2904.svc wheezy_udp@_http._tcp.dns-test-service.dns-2904.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2904.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2904 jessie_tcp@dns-test-service.dns-2904 jessie_udp@dns-test-service.dns-2904.svc jessie_tcp@dns-test-service.dns-2904.svc jessie_udp@_http._tcp.dns-test-service.dns-2904.svc jessie_tcp@_http._tcp.dns-test-service.dns-2904.svc]
Jun 10 22:01:23.683: INFO: DNS probes using dns-2904/dns-test-f934969c-f4e0-4a33-8c21-88dbfbb366db succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:01:23.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2904" for this suite.
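Note: the dig loops above are the whole mechanism of this test: the probe pod runs two resolver images ("wheezy" and "jessie"), and each successful lookup drops an OK file into /results, which the framework reads back from the pod; the early "Unable to read" lines are just the poller firing before the headless service's records and the result files exist, and the probes converged about five seconds later. The partial names resolve because the pod's /etc/resolv.conf search path (for this namespace, typically dns-2904.svc.cluster.local, then svc.cluster.local, then cluster.local) expands them, as in these invocations taken from the loop:

dig +notcp +noall +answer +search dns-test-service A                            # one label, fully expanded by search
dig +tcp +noall +answer +search dns-test-service.dns-2904 A                     # name plus namespace
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2904.svc SRV  # SRV record for the named port

With +search, dig appends each suffix in turn, so all of these resolve to the same records as the fully qualified dns-test-service.dns-2904.svc.cluster.local.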
• [SLOW TEST:11.236 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":462,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:01:23.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:01:23.807: INFO: Creating deployment "webserver-deployment"
Jun 10 22:01:23.811: INFO: Waiting for observed generation 1
Jun 10 22:01:25.817: INFO: Waiting for all required pods to come up
Jun 10 22:01:25.821: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 10 22:01:33.829: INFO: Waiting for deployment "webserver-deployment" to complete
Jun 10 22:01:33.834: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jun 10 22:01:33.842: INFO: Updating deployment webserver-deployment
Jun 10 22:01:33.842: INFO: Waiting for observed generation 2
Jun 10 22:01:35.848: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 10 22:01:35.851: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 10 22:01:35.854: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 10 22:01:35.863: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 10 22:01:35.863: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 10 22:01:35.866: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 10 22:01:35.872: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jun 10 22:01:35.872: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jun 10 22:01:35.880: INFO: Updating deployment webserver-deployment
Jun 10 22:01:35.880: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jun 10 22:01:35.885: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 10 22:01:37.892: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Jun 10 22:01:37.897: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-938 ac614cb9-c841-4bef-b4ed-ce9424c6c979 37965 3
2022-06-10 22:01:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00312f768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-06-10 22:01:35 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-06-10 22:01:36 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 10 22:01:37.902: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-938 05179f71-60df-4aab-bc9d-a7729df2b53c 37955 3 2022-06-10 22:01:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ac614cb9-c841-4bef-b4ed-ce9424c6c979 0xc00312fb57 0xc00312fb58}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac614cb9-c841-4bef-b4ed-ce9424c6c979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00312fbd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:01:37.902: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 10 22:01:37.902: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-938 d632e24d-de3f-4d65-a333-40e103ee677c 37964 3 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ac614cb9-c841-4bef-b4ed-ce9424c6c979 0xc00312fc37 0xc00312fc38}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:01:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac614cb9-c841-4bef-b4ed-ce9424c6c979\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00312fca8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:01:37.908: INFO: Pod "webserver-deployment-795d758f88-58vnv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-58vnv webserver-deployment-795d758f88- deployment-938 94f268bd-741d-4ec7-ab11-6f2283a7231e 37952 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396613f 0xc003966150}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h8gzn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8gzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-10 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.908: INFO: Pod "webserver-deployment-795d758f88-5hfvj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5hfvj webserver-deployment-795d758f88- deployment-938 ea2934f3-8962-47c4-a097-8d5c1e18f3cc 37923 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396631f 0xc003966330}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4k9p5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4k9p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.909: INFO: Pod "webserver-deployment-795d758f88-7hj7m" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7hj7m webserver-deployment-795d758f88- deployment-938 7114b20e-789e-46d2-93e2-d14cfcdf1949 37977 0 2022-06-10 22:01:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.86" ], "mac": "22:80:30:42:f1:5e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.86" ], "mac": "22:80:30:42:f1:5e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396649f 0xc0039664b0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2022-06-10 22:01:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gqkts,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gqkts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-10 22:01:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.909: INFO: Pod "webserver-deployment-795d758f88-chddf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-chddf webserver-deployment-795d758f88- deployment-938 0add10d7-81ea-45cb-97ae-867d101955da 37841 0 2022-06-10 22:01:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396669f 0xc0039666b0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vfs2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfs2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-10 22:01:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.910: INFO: Pod "webserver-deployment-795d758f88-kd8fj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kd8fj webserver-deployment-795d758f88- deployment-938 604192b0-96d3-41d1-a656-7adfd212e24d 37951 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396687f 0xc003966890}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9jqsv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jqsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-10 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.910: INFO: Pod "webserver-deployment-795d758f88-ks72b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ks72b webserver-deployment-795d758f88- deployment-938 ba6a0870-cb2e-4870-92d9-ebe2694329c0 37871 0 2022-06-10 22:01:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc003966a5f 0xc003966a70}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zfb6r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zfb6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-10 22:01:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.910: INFO: Pod "webserver-deployment-795d758f88-l5w7v" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-l5w7v webserver-deployment-795d758f88- deployment-938 8d006075-734b-42c3-9f6a-c7969e344469 37962 0 2022-06-10 22:01:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc003966c3f 0xc003966c50}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gj7wf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj7wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-10 22:01:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.911: INFO: Pod "webserver-deployment-795d758f88-pjqxn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pjqxn webserver-deployment-795d758f88- deployment-938 4a309aeb-11f4-40b5-807e-d66d77e9f11b 37899 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc003966e1f 0xc003966e30}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xml96,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xml96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.911: INFO: Pod "webserver-deployment-795d758f88-rqglz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rqglz webserver-deployment-795d758f88- deployment-938 fe4a576c-0c38-404f-8138-1e301263a4e8 37857 0 2022-06-10 22:01:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc003966f9f 0xc003966fb0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hfkcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hfkcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.911: INFO: Pod "webserver-deployment-795d758f88-twwg5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-twwg5 webserver-deployment-795d758f88- deployment-938 0779b4ce-c4b9-4077-ab6d-134c3bf52947 37947 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396711f 0xc003967130}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2drj5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2drj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.911: INFO: Pod "webserver-deployment-795d758f88-wh272" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wh272 webserver-deployment-795d758f88- deployment-938 b32deac1-a313-46f6-8542-b61212dc71ba 37897 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396729f 0xc0039672b0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rbtqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rbtqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.912: INFO: Pod "webserver-deployment-795d758f88-wzgwr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wzgwr webserver-deployment-795d758f88- deployment-938 1ab9c26e-46b2-42f5-9a36-f9780507290b 37920 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396741f 0xc003967430}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n9f5n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9f5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.912: INFO: Pod "webserver-deployment-795d758f88-z8np6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-z8np6 webserver-deployment-795d758f88- deployment-938 e3fa4f95-354e-45a8-a418-1da8d302333c 37885 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 05179f71-60df-4aab-bc9d-a7729df2b53c 0xc00396759f 0xc0039675b0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05179f71-60df-4aab-bc9d-a7729df2b53c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vzjch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzjch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.912: INFO: Pod "webserver-deployment-847dcfb7fb-44cpn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-44cpn webserver-deployment-847dcfb7fb- deployment-938 32659cb1-cfcd-49f4-a12a-5bb9a1d7a7a7 37943 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc00396771f 0xc003967740}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rh7bm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rh7bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.913: INFO: Pod "webserver-deployment-847dcfb7fb-7sf6w" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7sf6w webserver-deployment-847dcfb7fb- deployment-938 dfed1317-7a22-42a0-940a-6afbb8b32734 37789 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.83" ], "mac": "6e:8b:30:df:a6:5b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.83" ], "mac": "6e:8b:30:df:a6:5b", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc0039678af 0xc0039678c0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8xcjm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8xcjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.83,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b0006ebb1054c79bc694b8b7acd43444d214e41f5b364f123fd4dcfae17477b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.913: INFO: Pod "webserver-deployment-847dcfb7fb-8fmzw" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8fmzw webserver-deployment-847dcfb7fb- deployment-938 e37bc9c8-f119-4a85-86bc-cf784c168b33 37937 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc003967aaf 0xc003967ac0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d96dz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d96dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.913: INFO: Pod "webserver-deployment-847dcfb7fb-92wl4" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-92wl4 webserver-deployment-847dcfb7fb- deployment-938 99aae754-5488-4a26-a9af-07edfeb1c6ac 37770 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.178" ], "mac": "6e:e4:a2:18:30:13", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.178" ], "mac": "6e:e4:a2:18:30:13", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc003967c1f 0xc003967c30}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kp4mz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kp4mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.178,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://3fdbe4a85185884a2b258bfef61630ebccaa3a1877c8023988809c15fe61cf42,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.914: INFO: Pod "webserver-deployment-847dcfb7fb-9p7cj" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9p7cj webserver-deployment-847dcfb7fb- deployment-938 81ce320e-aaa0-457c-8e1c-66cd11aefee1 37907 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc003967e1f 0xc003967e30}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sswzm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sswzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.914: INFO: Pod "webserver-deployment-847dcfb7fb-9t8vf" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9t8vf webserver-deployment-847dcfb7fb- deployment-938 978da6d6-b806-44f3-a37a-d050f5d07a7e 37792 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.81" ], "mac": "b2:5e:0c:a5:5d:39", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.81" ], "mac": "b2:5e:0c:a5:5d:39", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc003967f8f 0xc003967fa0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5jdnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jdnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.81,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://1192d31b7d4a334b8346d42ac957c3ae89c17ad98e9fc6f3bf96217d4eb6ecaf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.914: INFO: Pod "webserver-deployment-847dcfb7fb-bwtgl" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bwtgl webserver-deployment-847dcfb7fb- deployment-938 488ed2e4-5b7d-435a-9882-174faa1d5902 37940 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0a18f 0xc004a0a1a0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9skzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9skzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-10 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.914: INFO: Pod "webserver-deployment-847dcfb7fb-czq5q" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-czq5q webserver-deployment-847dcfb7fb- deployment-938 409941df-5f23-4722-997c-ad99cde8ebde 37945 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0a36f 0xc004a0a380}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f9bcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9bcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.915: INFO: Pod "webserver-deployment-847dcfb7fb-d7jw8" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-d7jw8 webserver-deployment-847dcfb7fb- deployment-938 3aee6a3b-1c28-41e1-9a6e-30a2a1c9f385 37739 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.175" ], "mac": "3e:f2:a7:64:79:7d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.175" ], "mac": "3e:f2:a7:64:79:7d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0a4df 0xc004a0a4f0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.175\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z7r7b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7r7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.175,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://3fc5daa5ead81049153dd2ad8e4b197f31717f72b77a0d5d53134b2982c14eea,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.915: INFO: Pod "webserver-deployment-847dcfb7fb-drgqw" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-drgqw webserver-deployment-847dcfb7fb- deployment-938 32d1f3d0-3354-4d54-8ef6-1e2c21b09602 37978 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0a6df 0xc004a0a6f0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8ksph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8ksph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-10 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.916: INFO: Pod "webserver-deployment-847dcfb7fb-h2gjq" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-h2gjq webserver-deployment-847dcfb7fb- deployment-938 ec499705-86b6-439f-8297-f3f3c1509bc5 37909 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0a89f 0xc004a0a8b0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gv7r6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv7r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-10 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.916: INFO: Pod "webserver-deployment-847dcfb7fb-hmrb5" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hmrb5 webserver-deployment-847dcfb7fb- deployment-938 41b6a879-fba5-477e-b800-8e47c9109e7b 37958 0 2022-06-10 22:01:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0aa5f 0xc004a0aa70}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hbxhr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbxhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.916: INFO: Pod "webserver-deployment-847dcfb7fb-hs9w6" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hs9w6 webserver-deployment-847dcfb7fb- deployment-938 39e53a25-e29a-4b8e-8e5c-273a0d3329ee 37928 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0abcf 0xc004a0abe0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r9pcz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9pcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-10 22:01:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.916: INFO: Pod "webserver-deployment-847dcfb7fb-l8pjm" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l8pjm webserver-deployment-847dcfb7fb- deployment-938 f1e038e0-b653-449f-b185-dca3b62f32d3 37903 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0ad8f 0xc004a0ada0}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4rbmf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4rbmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.917: INFO: Pod "webserver-deployment-847dcfb7fb-md4pc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-md4pc webserver-deployment-847dcfb7fb- deployment-938 b68d2a61-1bc8-4b48-aa9a-0900a108b4d3 37756 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.80" ], "mac": "5e:fe:37:be:66:06", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.80" ], "mac": "5e:fe:37:be:66:06", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0aeff 0xc004a0af10}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kwzp8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwzp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.80,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://26fcd6323d2fb3bbac6dc213c8d8214cf070489ce27bca71790191b69abd2216,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.917: INFO: Pod "webserver-deployment-847dcfb7fb-n4mzp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n4mzp webserver-deployment-847dcfb7fb- deployment-938 a9f11776-5666-4689-a62c-0db683f39e51 37741 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.177" ], "mac": "f2:ac:60:1f:dd:f3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.177" ], "mac": "f2:ac:60:1f:dd:f3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0b0ff 0xc004a0b110}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-httxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-httxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.177,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://86fe967e3adc7b60a8e80d47f23387e7ae73b5641f0c5d9079d4482dc9543d59,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.917: INFO: Pod "webserver-deployment-847dcfb7fb-nvlvg" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nvlvg webserver-deployment-847dcfb7fb- deployment-938 95284d8f-6f0f-4508-b991-a5f50fb4eea0 37890 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0b2ff 0xc004a0b310}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-47g4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47g4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.918: INFO: Pod "webserver-deployment-847dcfb7fb-rgc4q" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rgc4q webserver-deployment-847dcfb7fb- deployment-938 3a926ab2-3033-4ff1-9353-3f3a724bd67e 37786 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.82" ], "mac": "5e:af:f2:5f:e9:a3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.82" ], "mac": "5e:af:f2:5f:e9:a3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0b46f 0xc004a0b480}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rdb2s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdb2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.82,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://8243b620c8638750d59639ef6bc4b89fbe2a92996bde650a1d943cd725c6b6eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.918: INFO: Pod "webserver-deployment-847dcfb7fb-twwsm" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-twwsm webserver-deployment-847dcfb7fb- deployment-938 faa178a2-8f42-4a65-97f9-f99d8f8be771 37736 0 2022-06-10 22:01:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.176" ], "mac": "86:83:5b:5f:52:ad", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.176" ], "mac": "86:83:5b:5f:52:ad", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0b66f 0xc004a0b680}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:01:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:01:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.176\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6pntl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pntl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.176,StartTime:2022-06-10 22:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:01:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://845471b344fb8019a6235cb17baed53171e88c4cf00438db773b3c31b0b2d329,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:01:37.918: INFO: Pod "webserver-deployment-847dcfb7fb-zf2gm" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zf2gm webserver-deployment-847dcfb7fb- deployment-938 166b2095-8248-4186-b1b0-f6395b352db9 37881 0 2022-06-10 22:01:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d632e24d-de3f-4d65-a333-40e103ee677c 0xc004a0b86f 0xc004a0b880}] [] [{kube-controller-manager Update v1 2022-06-10 22:01:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d632e24d-de3f-4d65-a333-40e103ee677c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zz6hj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zz6hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:01:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:01:37.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-938" for this suite. • [SLOW TEST:14.142 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":23,"skipped":499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:01:38.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 10 22:01:38.556: INFO: starting watch STEP: patching STEP: updating Jun 10 22:01:38.564: INFO: waiting for watch events with expected annotations Jun 10 22:01:38.564: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:01:38.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4492" for this suite. 
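The Certificates spec just above walks the full certificates.k8s.io/v1 surface: create, list, watch, patch, the /approval and /status subresources, and delete. For illustration only (this code is not part of the suite), a minimal client-go sketch of the create-then-approve portion, assuming client-go v0.21 against the same kubeconfig the log shows; the CSR name and approval reason are invented, and the PEM request body is left as a placeholder:

package main

import (
	"context"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite logs (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Placeholder: a real PEM-encoded "CERTIFICATE REQUEST" block is required;
	// generating one (crypto/x509 + encoding/pem) is omitted from this sketch.
	var csrPEM []byte

	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "example-csr"}, // illustrative name
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    csrPEM,
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	created, err := cs.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Approval is written through the dedicated /approval subresource, which is
	// what the "patching /approval" / "updating /approval" STEPs exercise.
	created.Status.Conditions = append(created.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
		Type:   certificatesv1.CertificateApproved,
		Status: corev1.ConditionTrue,
		Reason: "SketchApproved", // illustrative reason
	})
	approved, err := cs.CertificatesV1().CertificateSigningRequests().UpdateApproval(ctx, created.Name, created, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("approved:", approved.Name)
}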
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":24,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:01:38.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" Jun 10 22:01:38.686: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 10 22:01:43.691: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:01:49.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3691" for this suite. • [SLOW TEST:11.064 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":25,"skipped":563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:02.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0610 22:00:02.563762 28 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exist by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:00.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-253" for this suite.
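The CronJob case above schedules jobs that overlap on purpose and then confirms at least two run at once. A minimal manifest in the same spirit, written against batch/v1 since the warning above notes batch/v1beta1 is deprecated (all names here are hypothetical):

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-demo
spec:
  schedule: "*/1 * * * *"         # fire every minute
  concurrencyPolicy: Allow        # permit overlapping jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleep
            image: busybox
            command: ["sleep", "300"]   # outlive the schedule interval so runs overlap
EOF
kubectl get jobs -w               # expect more than one active job within a few minutes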
• [SLOW TEST:118.045 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":17,"skipped":332,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:58.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-1340 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Jun 10 21:58:58.803: INFO: Found 0 stateful pods, waiting for 3 Jun 10 21:59:08.807: INFO: Found 2 stateful pods, waiting for 3 Jun 10 21:59:18.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 10 21:59:18.808: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 10 21:59:18.808: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 10 21:59:18.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1340 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 21:59:19.644: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 21:59:19.644: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 21:59:19.644: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jun 10 21:59:29.673: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 10 21:59:39.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1340 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:39.939: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 21:59:39.939: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 21:59:39.939: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 22:00:09.961: INFO: Waiting for StatefulSet statefulset-1340/ss2 to complete update STEP: Rolling back to a previous revision Jun 10 22:00:19.969: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1340 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 22:00:20.251: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 22:00:20.251: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 22:00:20.251: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 22:00:30.280: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 10 22:00:40.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1340 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:40.535: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 22:00:40.535: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 22:00:40.535: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 22:01:10.555: INFO: Waiting for StatefulSet statefulset-1340/ss2 to complete update Jun 10 22:01:10.555: INFO: Waiting for Pod statefulset-1340/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 10 22:01:20.562: INFO: Deleting all statefulset in ns statefulset-1340 Jun 10 22:01:20.564: INFO: Scaling statefulset ss2 to 0 Jun 10 22:02:00.578: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 22:02:00.580: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:00.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1340" for this suite. 
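The rolling-update test above edits the StatefulSet pod template, watches pods get replaced in reverse ordinal order, then rolls the template back, comparing controller-revision hashes such as ss2-5bbbc9fc94 pod by pod. A hand-driven sketch of the same moves (the container name webserver is an assumption, not taken from this log; check it with kubectl get sts ss2 -o jsonpath='{.spec.template.spec.containers[*].name}'):

kubectl -n statefulset-1340 set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl -n statefulset-1340 rollout status statefulset/ss2    # wait for the update to complete
kubectl -n statefulset-1340 rollout history statefulset/ss2   # lists the controller revisions
kubectl -n statefulset-1340 rollout undo statefulset/ss2      # roll back to the previous revision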
• [SLOW TEST:181.825 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":6,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:00.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:00.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9892" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":18,"skipped":366,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:41.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4937 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4937 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4937 Jun 10 22:00:41.761: INFO: Found 0 stateful pods, waiting for 1 Jun 10 22:00:51.766: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 10 22:00:51.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 22:00:52.007: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 22:00:52.007: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 22:00:52.007: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 22:00:52.010: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 10 22:01:02.014: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 10 22:01:02.014: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 22:01:02.026: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999475s Jun 10 22:01:03.029: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997142618s Jun 10 22:01:04.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.99418781s Jun 10 22:01:05.036: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.990999808s Jun 10 22:01:06.039: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.986816207s Jun 10 22:01:07.043: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.982941818s Jun 10 22:01:08.046: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.979803279s Jun 10 22:01:09.050: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.976749735s Jun 10 22:01:10.054: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.972490044s Jun 10 22:01:11.057: INFO: Verifying statefulset ss doesn't scale past 1 for another 969.254336ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4937 Jun 10 22:01:12.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:12.389: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 22:01:12.389: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 22:01:12.389: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 22:01:12.393: INFO: Found 1 stateful pods, waiting for 3 Jun 10 22:01:22.398: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:01:22.398: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:01:22.398: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 10 22:01:22.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 22:01:22.685: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 22:01:22.685: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 22:01:22.685: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 22:01:22.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 
exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 22:01:22.935: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 22:01:22.935: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 22:01:22.935: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 22:01:22.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 22:01:23.202: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 22:01:23.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 22:01:23.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 22:01:23.202: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 22:01:23.205: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 10 22:01:33.211: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 10 22:01:33.211: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 10 22:01:33.211: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 10 22:01:33.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999517s Jun 10 22:01:34.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996132024s Jun 10 22:01:35.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991721339s Jun 10 22:01:36.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987408696s Jun 10 22:01:37.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984183377s Jun 10 22:01:38.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980095704s Jun 10 22:01:39.245: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976804537s Jun 10 22:01:40.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.972631773s Jun 10 22:01:41.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.968744152s Jun 10 22:01:42.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 964.430635ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4937 Jun 10 22:01:43.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:43.818: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 22:01:43.818: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 22:01:43.818: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 22:01:43.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:44.205: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 22:01:44.205: INFO: stdout: "'/tmp/index.html' ->
'/usr/local/apache2/htdocs/index.html'\n" Jun 10 22:01:44.205: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 22:01:44.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4937 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:44.721: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 22:01:44.721: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 22:01:44.721: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 22:01:44.721: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 10 22:02:04.735: INFO: Deleting all statefulset in ns statefulset-4937 Jun 10 22:02:04.737: INFO: Scaling statefulset ss to 0 Jun 10 22:02:04.746: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 22:02:04.748: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:04.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4937" for this suite. • [SLOW TEST:83.038 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":12,"skipped":166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:01:23.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:05.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3085" for this suite. • [SLOW TEST:42.309 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":344,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:00.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:02:00.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138" in namespace "downward-api-9860" to be "Succeeded or Failed" Jun 10 22:02:00.660: INFO: Pod "downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342651ms Jun 10 22:02:02.662: INFO: Pod "downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005059965s Jun 10 22:02:04.667: INFO: Pod "downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010066273s Jun 10 22:02:06.673: INFO: Pod "downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015999682s STEP: Saw pod success Jun 10 22:02:06.673: INFO: Pod "downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138" satisfied condition "Succeeded or Failed" Jun 10 22:02:06.676: INFO: Trying to get logs from node node2 pod downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138 container client-container: STEP: delete the pod Jun 10 22:02:06.692: INFO: Waiting for pod downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138 to disappear Jun 10 22:02:06.694: INFO: Pod downwardapi-volume-510cf822-df48-4337-92bc-0c7f21005138 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:06.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9860" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:00.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:00.749: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:06.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7348" for this suite. 
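The CRD test body is short because all it exercises is creating and deleting CustomResourceDefinition objects as cluster admin. A self-contained sketch with a hypothetical group and kind:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com       # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl delete crd widgets.example.com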
• [SLOW TEST:6.046 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":19,"skipped":373,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:05.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-5463/configmap-test-dfb2be13-0e81-40b7-8f8f-715c70e0c47f STEP: Creating a pod to test consume configMaps Jun 10 22:02:05.662: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5" in namespace "configmap-5463" to be "Succeeded or Failed" Jun 10 22:02:05.665: INFO: Pod "pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092065ms Jun 10 22:02:07.668: INFO: Pod "pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005133623s Jun 10 22:02:09.672: INFO: Pod "pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009089624s STEP: Saw pod success Jun 10 22:02:09.672: INFO: Pod "pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5" satisfied condition "Succeeded or Failed" Jun 10 22:02:09.675: INFO: Trying to get logs from node node1 pod pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5 container env-test: STEP: delete the pod Jun 10 22:02:09.689: INFO: Waiting for pod pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5 to disappear Jun 10 22:02:09.692: INFO: Pod pod-configmaps-6d2b0b6d-e378-41d5-aeaa-279862476bb5 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:09.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5463" for this suite. 
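The ConfigMap test above materializes a ConfigMap key as a container environment variable and checks the pod's output. The same wiring by hand (all names are hypothetical):

kubectl create configmap demo-config --from-literal=DATA_1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: DATA_1
EOF
kubectl logs env-demo             # prints value-1 once the pod has run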
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:06.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 10 22:02:06.772: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 10 22:02:11.775: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:12.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9276" for this suite. • [SLOW TEST:6.054 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":103,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:06.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 10 22:02:06.828: INFO: Waiting up to 5m0s for pod "downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181" in namespace "downward-api-5096" to be "Succeeded or Failed" Jun 10 22:02:06.830: INFO: Pod "downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181": Phase="Pending", Reason="", readiness=false. Elapsed: 1.982732ms Jun 10 22:02:08.835: INFO: Pod "downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007593171s Jun 10 22:02:10.840: INFO: Pod "downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012274372s Jun 10 22:02:12.846: INFO: Pod "downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018034035s STEP: Saw pod success Jun 10 22:02:12.846: INFO: Pod "downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181" satisfied condition "Succeeded or Failed" Jun 10 22:02:12.848: INFO: Trying to get logs from node node2 pod downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181 container dapi-container: STEP: delete the pod Jun 10 22:02:13.043: INFO: Waiting for pod downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181 to disappear Jun 10 22:02:13.045: INFO: Pod downward-api-431bfbfa-920e-4a69-8a44-7ae81b96f181 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:13.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5096" for this suite. • [SLOW TEST:6.256 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":386,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:13.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Jun 10 22:02:13.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5906 cluster-info' Jun 10 22:02:13.252: INFO: stderr: "" Jun 10 22:02:13.252: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:13.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5906" for this suite. 
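The cluster-info check is one of the few conformance tests that can be reproduced verbatim from a shell; the escape sequences in the stdout above are just kubectl's terminal colors:

kubectl cluster-info          # prints the control plane endpoint (https://10.10.190.202:6443 in this run)
kubectl cluster-info dump     # the fuller diagnostic the command's own hint points to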
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":21,"skipped":392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:09.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:02:09.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21" in namespace "downward-api-7974" to be "Succeeded or Failed" Jun 10 22:02:09.809: INFO: Pod "downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010194ms Jun 10 22:02:11.812: INFO: Pod "downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005325771s Jun 10 22:02:13.819: INFO: Pod "downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012514314s STEP: Saw pod success Jun 10 22:02:13.819: INFO: Pod "downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21" satisfied condition "Succeeded or Failed" Jun 10 22:02:13.822: INFO: Trying to get logs from node node1 pod downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21 container client-container: STEP: delete the pod Jun 10 22:02:13.858: INFO: Waiting for pod downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21 to disappear Jun 10 22:02:13.860: INFO: Pod downwardapi-volume-32fd9017-977c-429f-8668-ee28961b2a21 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:13.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7974" for this suite. 
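Both downward API volume tests above mount container resource metadata as files and assert on the file contents. A sketch of the memory-limit variant via resourceFieldRef (names are hypothetical; the value is written in bytes by default):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-demo        # expect 67108864, i.e. 64Mi in bytes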
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":386,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:13.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:13.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9330" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":403,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:13.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics Jun 10 22:02:14.431: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 10 22:02:14.497: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For 
evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:14.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2055" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":22,"skipped":447,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:59:43.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4917 Jun 10 21:59:43.336: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:45.340: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 10 21:59:47.340: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 10 21:59:47.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 10 21:59:47.623: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jun 10 21:59:47.623: INFO: stdout: "iptables" Jun 10 21:59:47.623: INFO: proxyMode: iptables Jun 10 21:59:47.631: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 10 21:59:47.633: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4917 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4917 I0610 21:59:47.646439 25 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4917, replica count: 3 I0610 21:59:50.697812 25 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 21:59:53.699398 25 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 21:59:53.711: INFO: Creating new exec pod Jun 10 22:00:04.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Jun 10 22:00:05.136: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Jun 10 22:00:05.136: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad 
Request" Jun 10 22:00:05.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.56.24 80' Jun 10 22:00:05.387: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.56.24 80\nConnection to 10.233.56.24 80 port [tcp/http] succeeded!\n" Jun 10 22:00:05.387: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:00:05.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:05.651: INFO: rc: 1 Jun 10 22:00:05.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:00:06.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:06.941: INFO: rc: 1 Jun 10 22:00:06.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:00:07.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:07.891: INFO: rc: 1 Jun 10 22:00:07.891: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31240 + echo hostName nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:00:08.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:08.898: INFO: rc: 1 Jun 10 22:00:08.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:00:09.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:09.909: INFO: rc: 1 Jun 10 22:00:09.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:00:10.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:10.894: INFO: rc: 1 Jun 10 22:00:10.894: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:00:11.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:11.894: INFO: rc: 1 Jun 10 22:00:11.894: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:00:12.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:00:13.111: INFO: rc: 1 Jun 10 22:00:13.111: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:01:52.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:53.043: INFO: rc: 1 Jun 10 22:01:53.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:01:53.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:53.900: INFO: rc: 1 Jun 10 22:01:53.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:01:54.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:54.905: INFO: rc: 1 Jun 10 22:01:54.906: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + + echonc -v hostName -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:01:55.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:55.916: INFO: rc: 1 Jun 10 22:01:55.916: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:01:56.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:56.904: INFO: rc: 1 Jun 10 22:01:56.904: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:01:57.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:58.380: INFO: rc: 1 Jun 10 22:01:58.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:01:58.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:59.204: INFO: rc: 1 Jun 10 22:01:59.204: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:01:59.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:01:59.910: INFO: rc: 1 Jun 10 22:01:59.910: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:02:00.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:00.923: INFO: rc: 1 Jun 10 22:02:00.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:02:01.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:02.131: INFO: rc: 1 Jun 10 22:02:02.131: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:02:02.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:02.927: INFO: rc: 1 Jun 10 22:02:02.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31240 + echo hostName nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:02:03.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:03.909: INFO: rc: 1 Jun 10 22:02:03.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:02:04.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:04.879: INFO: rc: 1 Jun 10 22:02:04.879: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:02:05.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:05.901: INFO: rc: 1 Jun 10 22:02:05.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:02:05.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240' Jun 10 22:02:06.160: INFO: rc: 1 Jun 10 22:02:06.160: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4917 exec execpod-affinityllnmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31240: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31240 nc: connect to 10.10.190.207 port 31240 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
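The loop above is the service-reachability gate the e2e framework applies before its session-affinity assertions: re-run the probe about once per second until it succeeds or a 2m0s budget expires. Below is a minimal, standalone Go sketch of that retry pattern. It is not the framework's actual helper (the real test shells out via kubectl exec and nc from the exec pod, and the failing helper in the stack trace that follows is execAffinityTestForSessionAffinityTimeout); pollNodePort is a hypothetical name, and a plain TCP dial stands in for the nc probe.

package main

import (
	"fmt"
	"net"
	"time"
)

// pollNodePort keeps dialing addr roughly once per second until the connect
// succeeds or the deadline passes, mirroring the "Retrying..." loop above.
func pollNodePort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second) // like nc -w 2
		if err == nil {
			conn.Close()
			return nil // endpoint answered; affinity checks could proceed here
		}
		fmt.Printf("rc: 1 -- %v; Retrying...\n", err)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// 10.10.190.207:31240 is the node IP and NodePort probed by the test above.
	if err := pollNodePort("10.10.190.207:31240", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Run against the endpoint in this log, the sketch would print the same per-second "Retrying..." lines and end with the same 2m0s timeout error, since the NodePort never accepted a connection.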
Jun 10 22:02:06.161: FAIL: Unexpected error:
    <*errors.errorString | 0xc0036ea6e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31240 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31240 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0018c6160, 0x77b33d8, 0xc0030d82c0, 0xc000261180)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cc0f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000cc0f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000cc0f00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 10 22:02:06.162: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4917, will wait for the garbage collector to delete the pods
Jun 10 22:02:06.236: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.489718ms
Jun 10 22:02:06.337: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.789269ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4917".
STEP: Found 33 events.
Jun 10 22:02:17.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-8fwhz: { } Scheduled: Successfully assigned services-4917/affinity-nodeport-timeout-8fwhz to node2
Jun 10 22:02:17.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-gpbvx: { } Scheduled: Successfully assigned services-4917/affinity-nodeport-timeout-gpbvx to node2
Jun 10 22:02:17.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-xkzkr: { } Scheduled: Successfully assigned services-4917/affinity-nodeport-timeout-xkzkr to node2
Jun 10 22:02:17.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityllnmd: { } Scheduled: Successfully assigned services-4917/execpod-affinityllnmd to node1
Jun 10 22:02:17.154: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-4917/kube-proxy-mode-detector to node1
Jun 10 22:02:17.154: INFO: At 2022-06-10 21:59:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Created: Created container agnhost-container
Jun 10 22:02:17.154: INFO: At 2022-06-10 21:59:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 412.295886ms
Jun 10 22:02:17.154: INFO: At 2022-06-10 21:59:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:02:17.154: INFO: At 2022-06-10 21:59:44 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Started: Started container agnhost-container
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:47 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-xkzkr
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:47 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-gpbvx
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:47 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-8fwhz
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:47 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Killing: Stopping container agnhost-container
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:50 +0000 UTC - event for affinity-nodeport-timeout-8fwhz: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:50 +0000 UTC - event for affinity-nodeport-timeout-8fwhz: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 264.480702ms
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:50 +0000 UTC - event for affinity-nodeport-timeout-gpbvx: {kubelet node2} Created: Created container affinity-nodeport-timeout
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:50 +0000 UTC - event for affinity-nodeport-timeout-gpbvx: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 260.956968ms
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:50 +0000 UTC - event for affinity-nodeport-timeout-gpbvx: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:50 +0000 UTC - event for affinity-nodeport-timeout-xkzkr: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:51 +0000 UTC - event for affinity-nodeport-timeout-8fwhz: {kubelet node2} Started: Started container affinity-nodeport-timeout
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:51 +0000 UTC - event for affinity-nodeport-timeout-8fwhz: {kubelet node2} Created: Created container affinity-nodeport-timeout
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:51 +0000 UTC - event for affinity-nodeport-timeout-gpbvx: {kubelet node2} Started: Started container affinity-nodeport-timeout
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:51 +0000 UTC - event for affinity-nodeport-timeout-xkzkr: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 258.971095ms
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:51 +0000 UTC - event for affinity-nodeport-timeout-xkzkr: {kubelet node2} Created: Created container affinity-nodeport-timeout
Jun 10 22:02:17.155: INFO: At 2022-06-10 21:59:51 +0000 UTC - event for affinity-nodeport-timeout-xkzkr: {kubelet node2} Started: Started container affinity-nodeport-timeout
Jun 10 22:02:17.155: INFO: At 2022-06-10 22:00:01 +0000 UTC - event for execpod-affinityllnmd: {kubelet node1} Started: Started container agnhost-container
Jun 10 22:02:17.155: INFO: At 2022-06-10 22:00:01 +0000 UTC - event for execpod-affinityllnmd: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 461.045737ms
Jun 10 22:02:17.155: INFO: At 2022-06-10 22:00:01 +0000 UTC - event for execpod-affinityllnmd: {kubelet node1} Created: Created container agnhost-container
Jun 10 22:02:17.155: INFO: At 2022-06-10 22:00:01 +0000 UTC - event for execpod-affinityllnmd: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:02:17.155:
INFO: At 2022-06-10 22:02:06 +0000 UTC - event for affinity-nodeport-timeout-8fwhz: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Jun 10 22:02:17.155: INFO: At 2022-06-10 22:02:06 +0000 UTC - event for affinity-nodeport-timeout-gpbvx: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Jun 10 22:02:17.155: INFO: At 2022-06-10 22:02:06 +0000 UTC - event for affinity-nodeport-timeout-xkzkr: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Jun 10 22:02:17.155: INFO: At 2022-06-10 22:02:06 +0000 UTC - event for execpod-affinityllnmd: {kubelet node1} Killing: Stopping container agnhost-container Jun 10 22:02:17.157: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 22:02:17.157: INFO: Jun 10 22:02:17.161: INFO: Logging node info for node master1 Jun 10 22:02:17.164: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 38776 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:10 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:10 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:10 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:02:10 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:02:17.165: INFO: Logging kubelet events for node master1 Jun 10 22:02:17.167: INFO: Logging pods the kubelet thinks is on node master1 Jun 10 22:02:17.204: INFO: 
node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container nfd-controller ready: true, restart count 0
Jun 10 22:02:17.204: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:02:17.204: INFO: Container prometheus-operator ready: true, restart count 0
Jun 10 22:02:17.204: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:02:17.204: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:02:17.204: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 22:02:17.204: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 10 22:02:17.204: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-scheduler ready: true, restart count 0
Jun 10 22:02:17.204: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-proxy ready: true, restart count 3
Jun 10 22:02:17.204: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container docker-registry ready: true, restart count 0
Jun 10 22:02:17.204: INFO: Container nginx ready: true, restart count 0
Jun 10 22:02:17.204: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Init container install-cni ready: true, restart count 0
Jun 10 22:02:17.204: INFO: Container kube-flannel ready: true, restart count 1
Jun 10 22:02:17.204: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:02:17.204: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.204: INFO: Container autoscaler ready: true, restart count 1
Jun 10 22:02:17.310: INFO: Latency metrics for node master1
Jun 10 22:02:17.310: INFO: Logging node info for node master2
Jun 10 22:02:17.314: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43 38707 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:09 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:09 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:09 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:02:09 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 22:02:17.314: INFO: Logging kubelet events for node master2
Jun 10 22:02:17.316: INFO: Logging pods the kubelet thinks is on node master2
Jun 10 22:02:17.334: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 22:02:17.334: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:02:17.334: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:02:17.334: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container kube-controller-manager ready: true, restart count 1
Jun 10 22:02:17.334: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container kube-scheduler ready: true, restart count 3
Jun 10 22:02:17.334: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 22:02:17.334: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:02:17.334: INFO: Container kube-flannel ready: true, restart count 1
Jun 10 22:02:17.334: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:02:17.334: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.334: INFO: Container coredns ready: true, restart count 1
Jun 10 22:02:17.409: INFO: Latency metrics for node master2
Jun 10 22:02:17.409: INFO: Logging node info for node master3
Jun 10 22:02:17.412: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 38951 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:16 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:16 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:16 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:02:16 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 22:02:17.413: INFO: Logging kubelet events for node master3
Jun 10 22:02:17.414: INFO: Logging pods the kubelet thinks is on node master3
Jun 10 22:02:17.423: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 22:02:17.423: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 10 22:02:17.423: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container kube-scheduler ready: true, restart count 1
Jun 10 22:02:17.423: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 22:02:17.423: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:02:17.423: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:02:17.423: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:02:17.423: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container coredns ready: true, restart count 1
Jun 10 22:02:17.423: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:02:17.423: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:02:17.423: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:02:17.508: INFO: Latency metrics for node master3
Jun 10 22:02:17.508: INFO: Logging node info for node node1
Jun 10 22:02:17.511: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 38850 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true
feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:13 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:13 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:13 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:02:13 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:02:17.512: INFO: Logging kubelet events for node node1 Jun 10 22:02:17.514: INFO: Logging pods the kubelet thinks is on node node1 Jun 10 22:02:17.530: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 10 22:02:17.530: INFO: Container config-reloader ready: true, restart count 0 Jun 10 22:02:17.530: INFO: Container custom-metrics-apiserver ready: true, restart 
count 0 Jun 10 22:02:17.530: INFO: Container grafana ready: true, restart count 0 Jun 10 22:02:17.530: INFO: Container prometheus ready: true, restart count 1 Jun 10 22:02:17.530: INFO: liveness-caf0ee89-130d-4ec7-8b5b-93b4bb6df421 started at 2022-06-10 22:00:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:02:17.530: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 10 22:02:17.530: INFO: Container discover ready: false, restart count 0 Jun 10 22:02:17.530: INFO: Container init ready: false, restart count 0 Jun 10 22:02:17.530: INFO: Container install ready: false, restart count 0 Jun 10 22:02:17.530: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:02:17.530: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:02:17.530: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:02:17.530: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:02:17.530: INFO: concurrent-27581641-7xtf7 started at 2022-06-10 22:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container c ready: true, restart count 0 Jun 10 22:02:17.530: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:02:17.530: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:02:17.530: INFO: Container collectd ready: true, restart count 0 Jun 10 22:02:17.530: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:02:17.530: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:02:17.530: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 22:02:17.530: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:02:17.530: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:02:17.530: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:02:17.530: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:02:17.530: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:02:17.530: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:02:17.530: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:02:17.530: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container tas-extender ready: true, restart count 0 Jun 10 22:02:17.530: INFO: pod-908bd86e-8355-4891-adda-2fbd2110507c started at 2022-06-10 22:02:14 +0000 UTC (0+1 container statuses 
recorded) Jun 10 22:02:17.530: INFO: Container test-container ready: false, restart count 0 Jun 10 22:02:17.530: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:02:17.530: INFO: busybox-369d2da1-8a15-40be-ad8f-abd61dad0530 started at 2022-06-10 21:58:28 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container busybox ready: true, restart count 0 Jun 10 22:02:17.530: INFO: sysctl-bab78524-984f-48c1-b11e-814f50fbdc9d started at 2022-06-10 22:02:14 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:17.530: INFO: Container test-container ready: false, restart count 0 Jun 10 22:02:18.987: INFO: Latency metrics for node node1 Jun 10 22:02:18.987: INFO: Logging node info for node node2 Jun 10 22:02:18.990: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 38799 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 
kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:12:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:11 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:11 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:02:11 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:02:11 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:02:18.991: INFO: Logging kubelet events for node node2 Jun 10 22:02:18.993: INFO: Logging pods the kubelet thinks is on node node2 Jun 10 22:02:19.007: INFO: concurrent-27581642-6l7ml started at 2022-06-10 22:02:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.007: INFO: Container c ready: false, restart count 0 Jun 10 22:02:19.007: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.007: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:02:19.007: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:02:19.007: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:02:19.007: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:02:19.007: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded) Jun 10 22:02:19.007: INFO: Container discover ready: false, restart count 0 Jun 10 22:02:19.007: INFO: Container init ready: false, restart count 0 Jun 10 22:02:19.007: INFO: Container install ready: false, restart count 0 Jun 10 22:02:19.007: INFO: 
sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.007: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:02:19.007: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:02:19.007: INFO: Container collectd ready: true, restart count 0 Jun 10 22:02:19.007: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:02:19.008: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:02:19.008: INFO: liveness-33896925-8ce6-4d81-a18b-6df3d0ecd374 started at 2022-06-10 22:01:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:02:19.008: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 22:02:19.008: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:02:19.008: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:02:19.008: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:02:19.008: INFO: var-expansion-47c2353e-f43c-4b27-b570-2b9e4c893600 started at 2022-06-10 22:00:01 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container dapi-container ready: true, restart count 0 Jun 10 22:02:19.008: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:02:19.008: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:02:19.008: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:02:19.008: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:02:19.008: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:02:19.008: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:02:19.008: INFO: agnhost-primary-49bm7 started at 2022-06-10 22:02:13 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container agnhost-primary ready: true, restart count 0 Jun 10 22:02:19.008: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:02:19.008: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:02:19.235: INFO: Latency metrics for node node2 Jun 10 22:02:19.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4917" for this suite. 
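------------------------------
A note on the two long Node dumps above: when a spec fails, the e2e framework logs the full Node objects, kubelet events, per-node pod listings, and latency metrics for every node as failure diagnostics, which is why this spec's output is so large. The failure reported just below is the NodePort session-affinity spec timing out while polling 10.10.190.207:31240. What that spec exercises is ClientIP session affinity with a short timeout on a NodePort Service; a minimal sketch of such a Service follows (names, ports, and the timeout value are illustrative, not taken from this run):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// sessionAffinityService returns a NodePort Service that pins each client IP
// to one backend for TimeoutSeconds; the spec hits <nodeIP>:<nodePort>
// repeatedly and expects the same backend until the timeout elapses.
func sessionAffinityService() *corev1.Service {
	timeout := int32(10) // short, so affinity expiry is observable
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "affinity-backend"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
}
```
------------------------------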
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [155.948 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:06.161: Unexpected error: <*errors.errorString | 0xc0036ea6e0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31240 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31240 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":91,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:13.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:20.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5502" for this suite. 
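------------------------------
The Sysctls spec above (summary just below) sets a safe sysctl through the pod-level security context, waits for the pod to succeed, and verifies the value from the container's output. A minimal sketch of such a pod, assuming a busybox image and a one-shot command (both assumptions, not read from the run):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod sets kernel.shm_rmid_forced=1 for the pod's namespaces; this is
// one of the kubelet's default "safe" sysctls, so no extra kubelet
// configuration is required.
func sysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"}, // assumed name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
			RestartPolicy: corev1.RestartPolicyNever, // run once, then check the logs
		},
	}
}
```
------------------------------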
• [SLOW TEST:6.053 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":23,"skipped":413,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:20.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0610 22:02:20.076350 40 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Jun 10 22:02:20.084: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jun 10 22:02:20.088: INFO: starting watch STEP: patching STEP: updating Jun 10 22:02:20.102: INFO: waiting for watch events with expected annotations Jun 10 22:02:20.102: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:20.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-385" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":24,"skipped":418,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:14.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 10 22:02:14.547: INFO: Waiting up to 5m0s for pod "pod-908bd86e-8355-4891-adda-2fbd2110507c" in namespace "emptydir-2837" to be "Succeeded or Failed" Jun 10 22:02:14.550: INFO: Pod "pod-908bd86e-8355-4891-adda-2fbd2110507c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504033ms Jun 10 22:02:16.553: INFO: Pod "pod-908bd86e-8355-4891-adda-2fbd2110507c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005918743s Jun 10 22:02:18.560: INFO: Pod "pod-908bd86e-8355-4891-adda-2fbd2110507c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012451648s Jun 10 22:02:20.563: INFO: Pod "pod-908bd86e-8355-4891-adda-2fbd2110507c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016041662s STEP: Saw pod success Jun 10 22:02:20.563: INFO: Pod "pod-908bd86e-8355-4891-adda-2fbd2110507c" satisfied condition "Succeeded or Failed" Jun 10 22:02:20.566: INFO: Trying to get logs from node node1 pod pod-908bd86e-8355-4891-adda-2fbd2110507c container test-container: STEP: delete the pod Jun 10 22:02:20.589: INFO: Waiting for pod pod-908bd86e-8355-4891-adda-2fbd2110507c to disappear Jun 10 22:02:20.591: INFO: Pod pod-908bd86e-8355-4891-adda-2fbd2110507c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:20.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2837" for this suite. • [SLOW TEST:6.088 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:20.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Jun 10 22:02:20.690: INFO: created test-pod-1 Jun 10 22:02:20.700: INFO: created test-pod-2 Jun 10 22:02:20.710: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:20.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3742" for this suite. 
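------------------------------
The Pods spec above creates test-pod-1 through test-pod-3 and then removes them with a single collection delete instead of three individual DELETE calls. A client-go sketch of that operation; the label selector value is an assumption (the spec filters on a label it stamps onto its pods):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodCollection issues one DELETE against the pod collection in ns,
// restricted by a label selector, rather than deleting pods one by one.
func deletePodCollection(cs kubernetes.Interface, ns string) error {
	return cs.CoreV1().Pods(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=testing"}, // assumed selector
	)
}
```
------------------------------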
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":24,"skipped":477,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:12.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jun 10 22:02:12.843: INFO: namespace kubectl-6218 Jun 10 22:02:12.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6218 create -f -' Jun 10 22:02:13.248: INFO: stderr: "" Jun 10 22:02:13.248: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 10 22:02:14.252: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:14.252: INFO: Found 0 / 1 Jun 10 22:02:15.252: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:15.252: INFO: Found 0 / 1 Jun 10 22:02:16.255: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:16.255: INFO: Found 1 / 1 Jun 10 22:02:16.255: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 10 22:02:16.258: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:16.258: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 10 22:02:16.258: INFO: wait on agnhost-primary startup in kubectl-6218 Jun 10 22:02:16.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6218 logs agnhost-primary-49bm7 agnhost-primary' Jun 10 22:02:16.441: INFO: stderr: "" Jun 10 22:02:16.441: INFO: stdout: "Paused\n" STEP: exposing RC Jun 10 22:02:16.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6218 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jun 10 22:02:16.673: INFO: stderr: "" Jun 10 22:02:16.673: INFO: stdout: "service/rm2 exposed\n" Jun 10 22:02:16.676: INFO: Service rm2 in namespace kubectl-6218 found. STEP: exposing service Jun 10 22:02:18.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6218 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jun 10 22:02:18.879: INFO: stderr: "" Jun 10 22:02:18.879: INFO: stdout: "service/rm3 exposed\n" Jun 10 22:02:18.882: INFO: Service rm3 in namespace kubectl-6218 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:20.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6218" for this suite. 
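------------------------------
The Kubectl expose spec above drives the CLI, but the effect of `kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379` is simply a ClusterIP Service that reuses the replication controller's pod selector (app=agnhost, per the "Selector matched" lines logged above). A sketch of the roughly equivalent object:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// exposedService approximates what `kubectl expose rc agnhost-primary
// --name=rm2 --port=1234 --target-port=6379` creates.
func exposedService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "agnhost"}, // RC selector, per the log
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
}
```

Exposing rm2 again as rm3 works the same way: rm3 listens on port 2345 but keeps target port 6379 and the same selector, so both services front the same pods.
------------------------------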
• [SLOW TEST:8.074 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":9,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:01:49.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-33896925-8ce6-4d81-a18b-6df3d0ecd374 in namespace container-probe-3957 Jun 10 22:01:59.823: INFO: Started pod liveness-33896925-8ce6-4d81-a18b-6df3d0ecd374 in namespace container-probe-3957 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 22:01:59.826: INFO: Initial restart count of pod liveness-33896925-8ce6-4d81-a18b-6df3d0ecd374 is 0 Jun 10 22:02:21.871: INFO: Restart count of pod container-probe-3957/liveness-33896925-8ce6-4d81-a18b-6df3d0ecd374 is now 1 (22.045100585s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:21.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3957" for this suite. 
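------------------------------
The probing spec above runs a container whose /healthz endpoint goes unhealthy after a while and waits for the kubelet to restart it (restart count 0 -> 1 after ~22s here). A sketch of the kind of pod involved; the agnhost arguments and port are assumptions, and the embedded Handler field matches the v1.21-era API (later releases renamed it to ProbeHandler):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod runs a server that eventually fails /healthz; one failed probe
// (FailureThreshold: 1) is enough for the kubelet to restart the container.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"}, // assumed name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"liveness"}, // assumed: agnhost's liveness server
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
```
------------------------------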
• [SLOW TEST:32.115 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":592,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:20.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-000709f7-9090-45cc-9c50-34452394db7e STEP: Creating a pod to test consume configMaps Jun 10 22:02:20.188: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393" in namespace "configmap-9563" to be "Succeeded or Failed" Jun 10 22:02:20.191: INFO: Pod "pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.918399ms Jun 10 22:02:22.195: INFO: Pod "pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006540948s Jun 10 22:02:24.202: INFO: Pod "pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014204778s Jun 10 22:02:26.208: INFO: Pod "pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019285443s STEP: Saw pod success Jun 10 22:02:26.208: INFO: Pod "pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393" satisfied condition "Succeeded or Failed" Jun 10 22:02:26.210: INFO: Trying to get logs from node node1 pod pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393 container agnhost-container: STEP: delete the pod Jun 10 22:02:26.221: INFO: Waiting for pod pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393 to disappear Jun 10 22:02:26.223: INFO: Pod pod-configmaps-e0adaaff-536d-414e-b37d-8868dc8f0393 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:26.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9563" for this suite. 
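------------------------------
The ConfigMap spec above mounts a ConfigMap as a volume and has a container running under a non-root UID read a key back out, confirming the projected files are readable by that user. A sketch under assumptions (the UID, mount path, key name, and the agnhost mounttest flag are all illustrative):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapPod mounts cmName at /etc/configmap-volume and reads one key from
// it while running as UID 1000 (non-root).
func configMapPod(cmName string) *corev1.Pod {
	uid := int64(1000) // assumed non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"}, // assumed name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"mounttest", "--file_content=/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
```
------------------------------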
• [SLOW TEST:6.079 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":425,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:19.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:19.343: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:27.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9495" for this suite. • [SLOW TEST:8.140 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":7,"skipped":128,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:27.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-9f108b88-6839-45ba-af43-2b134d71badc [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:27.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4669" for this suite. 
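The empty-key ConfigMap test above never creates a pod at all; it only needs the apiserver to reject the object. The same check can be reproduced with a few lines of client-go (the kubeconfig path and namespace are assumptions for this sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // "" is not a valid data key
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println(err) // expect a validation (Invalid) error from the apiserver, not nil
}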
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":8,"skipped":139,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:27.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-f6b22bea-7337-461d-a529-dea383b3c17b STEP: Creating a pod to test consume secrets Jun 10 22:02:27.562: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1" in namespace "projected-183" to be "Succeeded or Failed" Jun 10 22:02:27.566: INFO: Pod "pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.829988ms Jun 10 22:02:29.569: INFO: Pod "pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00693883s Jun 10 22:02:31.574: INFO: Pod "pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011767144s STEP: Saw pod success Jun 10 22:02:31.574: INFO: Pod "pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1" satisfied condition "Succeeded or Failed" Jun 10 22:02:31.576: INFO: Trying to get logs from node node2 pod pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1 container secret-volume-test: STEP: delete the pod Jun 10 22:02:31.591: INFO: Waiting for pod pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1 to disappear Jun 10 22:02:31.593: INFO: Pod pod-projected-secrets-dc3d0925-55ae-4530-b9d9-4d0b501aecb1 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:31.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-183" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":144,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:04.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:32.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9493" for this suite. • [SLOW TEST:28.063 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":13,"skipped":197,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:58:28.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-369d2da1-8a15-40be-ad8f-abd61dad0530 in namespace container-probe-7602 Jun 10 21:58:34.958: INFO: Started pod busybox-369d2da1-8a15-40be-ad8f-abd61dad0530 in namespace container-probe-7602 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 21:58:34.961: INFO: Initial restart count of pod busybox-369d2da1-8a15-40be-ad8f-abd61dad0530 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:35.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7602" for this suite. 
• [SLOW TEST:246.593 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":74,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:31.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jun 10 22:02:31.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8520 create -f -' Jun 10 22:02:32.112: INFO: stderr: "" Jun 10 22:02:32.112: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 10 22:02:33.116: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:33.116: INFO: Found 0 / 1 Jun 10 22:02:34.117: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:34.117: INFO: Found 0 / 1 Jun 10 22:02:35.116: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:35.116: INFO: Found 0 / 1 Jun 10 22:02:36.115: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:36.115: INFO: Found 1 / 1 Jun 10 22:02:36.116: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 10 22:02:36.118: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:36.118: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 10 22:02:36.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8520 patch pod agnhost-primary-vxptc -p {"metadata":{"annotations":{"x":"y"}}}' Jun 10 22:02:36.285: INFO: stderr: "" Jun 10 22:02:36.285: INFO: stdout: "pod/agnhost-primary-vxptc patched\n" STEP: checking annotations Jun 10 22:02:36.287: INFO: Selector matched 1 pods for map[app:agnhost] Jun 10 22:02:36.288: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:36.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8520" for this suite. 
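The kubectl patch above maps directly onto a PATCH against the pod object. The same annotation update via client-go looks roughly like this; the pod name and namespace are copied from the run above, which has since been torn down, so treat it purely as a sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same body kubectl sent: {"metadata":{"annotations":{"x":"y"}}}
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := client.CoreV1().Pods("kubectl-8520").Patch(
		context.TODO(), "agnhost-primary-vxptc", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Annotations["x"]) // "y"
}

Strategic merge is what kubectl defaults to for built-in types, which is why the one-key annotation body merges into the existing metadata instead of replacing it.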
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":10,"skipped":169,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:32.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-6344/configmap-test-17b3063e-ae7c-4fd1-80e3-d820c05fb742 STEP: Creating a pod to test consume configMaps Jun 10 22:02:32.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127" in namespace "configmap-6344" to be "Succeeded or Failed" Jun 10 22:02:32.932: INFO: Pod "pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127": Phase="Pending", Reason="", readiness=false. Elapsed: 1.976082ms Jun 10 22:02:34.935: INFO: Pod "pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005469124s Jun 10 22:02:36.939: INFO: Pod "pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009026593s STEP: Saw pod success Jun 10 22:02:36.939: INFO: Pod "pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127" satisfied condition "Succeeded or Failed" Jun 10 22:02:36.941: INFO: Trying to get logs from node node1 pod pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127 container env-test: STEP: delete the pod Jun 10 22:02:36.954: INFO: Waiting for pod pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127 to disappear Jun 10 22:02:36.956: INFO: Pod pod-configmaps-40c509d9-2d9e-4641-8942-3f885edb1127 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:36.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6344" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":199,"failed":0} SSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:36.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:37.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1522" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":15,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:21.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:38.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-77" for this suite. • [SLOW TEST:16.119 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":27,"skipped":604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:35.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:35.554: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:41.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1594" for this suite. 
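Stepping back to the ResourceQuota scope test above: a Terminating-scoped quota only counts pods with spec.activeDeadlineSeconds set, and a NotTerminating-scoped quota only those without it, which is exactly the cross-check the STEP lines walk through (each pod charges one quota and is ignored by the other). A sketch of the terminating-scoped half; the name and amounts are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-terminating"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:           resource.MustParse("1"),
				corev1.ResourceRequestsCPU:    resource.MustParse("1"),
				corev1.ResourceRequestsMemory: resource.MustParse("500Mi"),
			},
			// Only pods with spec.activeDeadlineSeconds set count against this quota.
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating},
		},
	}
	out, err := yaml.Marshal(quota)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}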
• [SLOW TEST:5.562 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":4,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:37.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 10 22:02:37.152: INFO: Waiting up to 5m0s for pod "pod-812de067-e753-4968-9ccf-86d2a4cf62a5" in namespace "emptydir-7902" to be "Succeeded or Failed" Jun 10 22:02:37.155: INFO: Pod "pod-812de067-e753-4968-9ccf-86d2a4cf62a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505108ms Jun 10 22:02:39.157: INFO: Pod "pod-812de067-e753-4968-9ccf-86d2a4cf62a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004943626s Jun 10 22:02:41.161: INFO: Pod "pod-812de067-e753-4968-9ccf-86d2a4cf62a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008423443s STEP: Saw pod success Jun 10 22:02:41.161: INFO: Pod "pod-812de067-e753-4968-9ccf-86d2a4cf62a5" satisfied condition "Succeeded or Failed" Jun 10 22:02:41.164: INFO: Trying to get logs from node node2 pod pod-812de067-e753-4968-9ccf-86d2a4cf62a5 container test-container: STEP: delete the pod Jun 10 22:02:41.175: INFO: Waiting for pod pod-812de067-e753-4968-9ccf-86d2a4cf62a5 to disappear Jun 10 22:02:41.177: INFO: Pod pod-812de067-e753-4968-9ccf-86d2a4cf62a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:41.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7902" for this suite. 
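In the emptyDir test above, medium "Memory" makes the volume a tmpfs mount, and the pod just writes a file with the requested 0666 mode and lists it back. A sketch; the file name, command, and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.32",
				// Write a 0666 file into the tmpfs mount, then print its permissions.
				Command:      []string{"sh", "-c", "umask 0; touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}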
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:41.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Jun 10 22:02:41.313: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5498 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:41.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5498" for this suite. • ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:36.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 10 22:02:42.507: INFO: DNS probes using dns-7792/dns-test-f2535f7c-5015-4dd7-bec1-443e893eec80 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:42.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7792" for this suite. • [SLOW TEST:6.221 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":11,"skipped":172,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:42.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:42.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5569" for this suite. 
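The QOS test above relies on a simple rule: when every container's requests equal its limits for both cpu and memory, the apiserver sets status.qosClass to Guaranteed. A sketch of such a pod; the name, image, and amounts are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-guaranteed"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.32",
				// Requests == limits for every resource => status.qosClass: Guaranteed.
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

Dropping the limits (or setting them higher than the requests) would yield Burstable, and omitting both yields BestEffort; the test only has to read the class back from the pod status.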
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":12,"skipped":208,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":17,"skipped":305,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:41.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:02:41.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56" in namespace "projected-2091" to be "Succeeded or Failed" Jun 10 22:02:41.459: INFO: Pod "downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179868ms Jun 10 22:02:43.464: INFO: Pod "downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007843779s Jun 10 22:02:45.469: INFO: Pod "downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011934492s STEP: Saw pod success Jun 10 22:02:45.469: INFO: Pod "downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56" satisfied condition "Succeeded or Failed" Jun 10 22:02:45.471: INFO: Trying to get logs from node node1 pod downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56 container client-container: STEP: delete the pod Jun 10 22:02:45.485: INFO: Waiting for pod downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56 to disappear Jun 10 22:02:45.487: INFO: Pod downwardapi-volume-70f5b67d-35b6-418a-929a-8dcf779dee56 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:45.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2091" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":305,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:01.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Jun 10 22:02:01.613: INFO: Successfully updated pod "var-expansion-47c2353e-f43c-4b27-b570-2b9e4c893600" STEP: waiting for pod running STEP: deleting the pod gracefully Jun 10 22:02:05.618: INFO: Deleting pod "var-expansion-47c2353e-f43c-4b27-b570-2b9e4c893600" in namespace "var-expansion-9845" Jun 10 22:02:05.622: INFO: Wait up to 5m0s for pod "var-expansion-47c2353e-f43c-4b27-b570-2b9e4c893600" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:47.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9845" for this suite. • [SLOW TEST:166.576 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":13,"skipped":142,"failed":0} S ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:42.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 10 22:02:42.661: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:48.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6437" 
for this suite. • [SLOW TEST:6.222 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":13,"skipped":214,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:45.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-d2b8ffcf-1e7c-4ea1-995c-252a1e8a9d9b STEP: Creating a pod to test consume configMaps Jun 10 22:02:45.555: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd" in namespace "projected-8035" to be "Succeeded or Failed" Jun 10 22:02:45.559: INFO: Pod "pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150705ms Jun 10 22:02:47.563: INFO: Pod "pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008145897s Jun 10 22:02:49.568: INFO: Pod "pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012554263s STEP: Saw pod success Jun 10 22:02:49.568: INFO: Pod "pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd" satisfied condition "Succeeded or Failed" Jun 10 22:02:49.570: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd container projected-configmap-volume-test: STEP: delete the pod Jun 10 22:02:49.584: INFO: Waiting for pod pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd to disappear Jun 10 22:02:49.586: INFO: Pod pod-projected-configmaps-34f48ba7-6317-4f10-9716-af3f4468facd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:49.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8035" for this suite. 
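On the init-container test above: with restartPolicy Never, one failing init container is terminal; the pod phase goes Failed and the app container is never started, which is exactly what the test asserts. A sketch of the shape of such a pod (images and commands are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// With RestartPolicy Never, a failing init container fails the
			// whole pod; the app container below never runs.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox:1.35", // illustrative image
				Command: []string{"/bin/false"},
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox:1.35",
				Command: []string{"/bin/true"},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}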
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:26.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Jun 10 22:02:26.264: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:51.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1785" for this suite. • [SLOW TEST:25.655 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":26,"skipped":430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:47.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:47.702: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c25fafa1-9594-4064-b3dc-7c44e0c4f1b4", Controller:(*bool)(0xc0043057c2), BlockOwnerDeletion:(*bool)(0xc0043057c3)}} Jun 10 22:02:47.707: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"90e9760e-ed9d-47a2-afd5-1d6441128249", Controller:(*bool)(0xc004305a6a), BlockOwnerDeletion:(*bool)(0xc004305a6b)}} Jun 10 22:02:47.714: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"06347667-80d8-469e-abf4-7917fe1d8387", 
Controller:(*bool)(0xc003c7478a), BlockOwnerDeletion:(*bool)(0xc003c7478b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:52.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5121" for this suite. • [SLOW TEST:5.089 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":14,"skipped":143,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:48.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-1322b7ee-dd3c-4c03-b29f-c5c819e5feac STEP: Creating a pod to test consume configMaps Jun 10 22:02:49.001: INFO: Waiting up to 5m0s for pod "pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624" in namespace "configmap-3874" to be "Succeeded or Failed" Jun 10 22:02:49.004: INFO: Pod "pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931889ms Jun 10 22:02:51.008: INFO: Pod "pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007463229s Jun 10 22:02:53.012: INFO: Pod "pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011129586s STEP: Saw pod success Jun 10 22:02:53.012: INFO: Pod "pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624" satisfied condition "Succeeded or Failed" Jun 10 22:02:53.014: INFO: Trying to get logs from node node1 pod pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624 container agnhost-container: STEP: delete the pod Jun 10 22:02:53.029: INFO: Waiting for pod pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624 to disappear Jun 10 22:02:53.031: INFO: Pod pod-configmaps-f956b1bc-c309-468e-bebc-5ad303b1e624 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:53.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3874" for this suite. 
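The three OwnerReferences dumps above form a deliberate cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) that the garbage collector must break rather than deadlock on. The references are plain metav1.OwnerReference values; rebuilt as code, with the UIDs taken from the log lines above:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	controller, block := true, true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	uids := map[string]types.UID{
		"pod1": "90e9760e-ed9d-47a2-afd5-1d6441128249",
		"pod2": "06347667-80d8-469e-abf4-7917fe1d8387",
		"pod3": "c25fafa1-9594-4064-b3dc-7c44e0c4f1b4",
	}
	// The closed cycle the test builds:
	fmt.Printf("pod1 owner: %+v\n", ownerRef("pod3", uids["pod3"]))
	fmt.Printf("pod2 owner: %+v\n", ownerRef("pod1", uids["pod1"]))
	fmt.Printf("pod3 owner: %+v\n", ownerRef("pod2", uids["pod2"]))
}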
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":270,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:52.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-4fb19eb6-5a79-4319-8873-abfda4e1258c STEP: Creating a pod to test consume secrets Jun 10 22:02:52.066: INFO: Waiting up to 5m0s for pod "pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0" in namespace "secrets-7379" to be "Succeeded or Failed" Jun 10 22:02:52.068: INFO: Pod "pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.677301ms Jun 10 22:02:54.071: INFO: Pod "pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005322856s Jun 10 22:02:56.074: INFO: Pod "pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008809684s STEP: Saw pod success Jun 10 22:02:56.074: INFO: Pod "pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0" satisfied condition "Succeeded or Failed" Jun 10 22:02:56.077: INFO: Trying to get logs from node node1 pod pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0 container secret-env-test: STEP: delete the pod Jun 10 22:02:56.092: INFO: Waiting for pod pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0 to disappear Jun 10 22:02:56.094: INFO: Pod pod-secrets-91d90779-b0e7-4432-8fb7-93e704a79ca0 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:56.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7379" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:41.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 22:02:41.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 22:02:43.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495361, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495361, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495361, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495361, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:02:46.614: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:56.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8033" for this suite. 
STEP: Destroying namespace "webhook-8033-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.583 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":5,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:53.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-9043f609-a787-4ddc-b140-38d795ada5ab STEP: Creating a pod to test consume configMaps Jun 10 22:02:53.110: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5" in namespace "projected-8922" to be "Succeeded or Failed" Jun 10 22:02:53.112: INFO: Pod "pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366523ms Jun 10 22:02:55.115: INFO: Pod "pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005525402s Jun 10 22:02:57.119: INFO: Pod "pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00921889s STEP: Saw pod success Jun 10 22:02:57.119: INFO: Pod "pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5" satisfied condition "Succeeded or Failed" Jun 10 22:02:57.122: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5 container agnhost-container: STEP: delete the pod Jun 10 22:02:57.138: INFO: Waiting for pod pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5 to disappear Jun 10 22:02:57.140: INFO: Pod pod-projected-configmaps-9082c99d-9d5f-43c2-af2c-1298e7bf6ae5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:02:57.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8922" for this suite. 
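For the admission-webhook test that finished just above: once the deployment and the e2e-test-webhook service are serving, the test registers webhooks through the admissionregistration.k8s.io/v1 API. A trimmed sketch of a deny-on-create/update registration of that shape; the webhook name, path, and CA bundle are placeholders, while the service name and namespace are taken from the run above:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-deny" // illustrative path on the webhook service
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pod-and-configmap.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-pod-and-configmap.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-8033", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: []byte("<ca-bundle>"), // the serving CA the test generated
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create, admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods", "configmaps"},
				},
			}},
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

The whitelisted-namespace step in the log corresponds to a namespaceSelector on this object (omitted here), which is how the test creates a "bypass" namespace the webhook never sees.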
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":278,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:56.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:56.273: INFO: The status of Pod pod-secrets-2e14861f-fcfa-435c-85d4-c6d1b0a45855 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:02:58.277: INFO: The status of Pod pod-secrets-2e14861f-fcfa-435c-85d4-c6d1b0a45855 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:00.277: INFO: The status of Pod pod-secrets-2e14861f-fcfa-435c-85d4-c6d1b0a45855 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:02.277: INFO: The status of Pod pod-secrets-2e14861f-fcfa-435c-85d4-c6d1b0a45855 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:02.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5518" for this suite. 
• [SLOW TEST:6.077 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":28,"skipped":565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:49.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 10 22:02:49.940: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 10 22:02:51.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495369, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495369, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495369, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495369, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:02:54.967: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:54.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:03.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8936" for this suite. 
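------------------------------
Behind this trace, the API server POSTs a ConversionReview to the webhook service deployed above whenever a custom resource stored as v1 is requested as v2. A toy handler with the same shape is sketched below; it only rewrites apiVersion, which suffices when both versions share a schema (the real e2e converter also moves fields between versions). The handler path, listen address, and certificate paths are placeholders for what the test generates.

package main

import (
	"encoding/json"
	"log"
	"net/http"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// convert answers the ConversionReview the kube-apiserver POSTs to the webhook.
func convert(w http.ResponseWriter, r *http.Request) {
	var review apiextensionsv1.ConversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
		return
	}
	resp := &apiextensionsv1.ConversionResponse{
		UID:    review.Request.UID,
		Result: metav1.Status{Status: metav1.StatusSuccess},
	}
	for _, raw := range review.Request.Objects {
		obj := &unstructured.Unstructured{}
		if err := obj.UnmarshalJSON(raw.Raw); err != nil {
			resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
			break
		}
		// Trivial conversion: stamp the desired apiVersion and echo the object back.
		obj.SetAPIVersion(review.Request.DesiredAPIVersion)
		converted, err := obj.MarshalJSON()
		if err != nil {
			resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
			break
		}
		resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Raw: converted})
	}
	review.Response = resp
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/crdconvert", convert)
	log.Fatal(http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil))
}
------------------------------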
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.416 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":20,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:02.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-47aba166-aa3b-412a-bba0-1d2b6871d3fa STEP: Creating a pod to test consume secrets Jun 10 22:03:02.387: INFO: Waiting up to 5m0s for pod "pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc" in namespace "secrets-7907" to be "Succeeded or Failed" Jun 10 22:03:02.390: INFO: Pod "pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676323ms Jun 10 22:03:04.393: INFO: Pod "pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005452128s Jun 10 22:03:06.399: INFO: Pod "pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011126813s STEP: Saw pod success Jun 10 22:03:06.399: INFO: Pod "pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc" satisfied condition "Succeeded or Failed" Jun 10 22:03:06.401: INFO: Trying to get logs from node node2 pod pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc container secret-volume-test: STEP: delete the pod Jun 10 22:03:06.415: INFO: Waiting for pod pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc to disappear Jun 10 22:03:06.417: INFO: Pod pod-secrets-6aa8f9c4-1594-41dc-93a1-5c3b4d4d6adc no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:06.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7907" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:03.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:03:03.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c" in namespace "projected-7288" to be "Succeeded or Failed" Jun 10 22:03:03.164: INFO: Pod "downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026618ms Jun 10 22:03:05.167: INFO: Pod "downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00515533s Jun 10 22:03:07.172: INFO: Pod "downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009990168s STEP: Saw pod success Jun 10 22:03:07.172: INFO: Pod "downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c" satisfied condition "Succeeded or Failed" Jun 10 22:03:07.175: INFO: Trying to get logs from node node1 pod downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c container client-container: STEP: delete the pod Jun 10 22:03:07.191: INFO: Waiting for pod downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c to disappear Jun 10 22:03:07.193: INFO: Pod downwardapi-volume-6d322768-884b-49ef-90a2-68e23301696c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:07.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7288" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":380,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:07.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:03:07.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4" in namespace "projected-1118" to be "Succeeded or Failed" Jun 10 22:03:07.268: INFO: Pod "downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.912102ms Jun 10 22:03:09.272: INFO: Pod "downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005333739s Jun 10 22:03:11.275: INFO: Pod "downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008977183s STEP: Saw pod success Jun 10 22:03:11.275: INFO: Pod "downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4" satisfied condition "Succeeded or Failed" Jun 10 22:03:11.277: INFO: Trying to get logs from node node2 pod downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4 container client-container: STEP: delete the pod Jun 10 22:03:11.288: INFO: Waiting for pod downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4 to disappear Jun 10 22:03:11.290: INFO: Pod downwardapi-volume-61175cab-abcd-4838-b7e8-79039584c2e4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:11.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1118" for this suite. 
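------------------------------
The last two downward-API cases come down to how the volume's Items and DefaultMode are declared: a fieldRef file for pod metadata, and a resourceFieldRef file for limits.memory, which the kubelet fills with the node's allocatable memory when the container declares no limit. A sketch combining both, with illustrative names and image, and 0400 as an assumed asserted default mode:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0400) // every file in the volume inherits this unless overridden per item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-downwardapi"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{
							{
								Path:     "podname",
								FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							},
							{
								// No memory limit is set on the container below,
								// so this resolves to node allocatable memory.
								Path: "memory_limit",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.memory",
								},
							},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative; the suite uses its agnhost mounttest image
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------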
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:06.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 10 22:03:06.562: INFO: The status of Pod annotationupdate58c2e29e-c039-4cc7-a7e8-e09c4153ecf9 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:08.568: INFO: The status of Pod annotationupdate58c2e29e-c039-4cc7-a7e8-e09c4153ecf9 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:10.566: INFO: The status of Pod annotationupdate58c2e29e-c039-4cc7-a7e8-e09c4153ecf9 is Running (Ready = true) Jun 10 22:03:11.086: INFO: Successfully updated pod "annotationupdate58c2e29e-c039-4cc7-a7e8-e09c4153ecf9" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:15.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2091" for this suite. 
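------------------------------
The annotation-update case above adds a wrinkle: the pod is mutated while running, racing against the kubelet's own writes to the object, so the mutation is a read-modify-write under conflict retry, after which the kubelet rewrites the mounted downward-API file in place. Roughly, with illustrative pod name and annotation value:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "annotationupdate-demo" // illustrative

	// Re-read and update until the write goes through without a 409 Conflict.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Annotations == nil {
			pod.Annotations = map[string]string{}
		}
		pod.Annotations["builder"] = "bar" // the value the mounted file should converge to
		_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}

The "Successfully updated pod" line above corresponds to this write; the remaining seconds of the test are spent waiting for the kubelet to refresh the annotations file inside the volume.
------------------------------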
• [SLOW TEST:8.647 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:11.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:03:11.436: INFO: Creating simple deployment test-new-deployment Jun 10 22:03:11.444: INFO: deployment "test-new-deployment" doesn't have the required revision set Jun 10 22:03:13.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:03:15.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495391, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was 
modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 10 22:03:17.476: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-9415 d0a87666-a8b2-4085-8dd8-ee78302c69b2 40547 3 2022-06-10 22:03:11 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-06-10 22:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:03:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e26ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-10 22:03:15 +0000 UTC,LastTransitionTime:2022-06-10 22:03:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-06-10 22:03:15 +0000 UTC,LastTransitionTime:2022-06-10 22:03:11 +0000
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 10 22:03:17.479: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-9415 4300d65a-dcd9-47fe-a5ae-9fbf5ebbed68 40548 2 2022-06-10 22:03:11 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment d0a87666-a8b2-4085-8dd8-ee78302c69b2 0xc004e26ee7 0xc004e26ee8}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:03:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0a87666-a8b2-4085-8dd8-ee78302c69b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e27048 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:03:17.483: INFO: Pod "test-new-deployment-847dcfb7fb-6q2nw" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-6q2nw test-new-deployment-847dcfb7fb- deployment-9415 990c23ad-f3de-45f9-97ff-3f4926e0c79c 40553 0 2022-06-10 22:03:17 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 4300d65a-dcd9-47fe-a5ae-9fbf5ebbed68 0xc004e273ef 0xc004e27400}] [] [{kube-controller-manager Update v1 2022-06-10 22:03:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4300d65a-dcd9-47fe-a5ae-9fbf5ebbed68\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-khkzc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khkzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:03:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 10 22:03:17.483: INFO: Pod "test-new-deployment-847dcfb7fb-s5w9g" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-s5w9g test-new-deployment-847dcfb7fb- deployment-9415 ac7c650c-5395-4093-8a5c-6a947368e4e4 40526 0 2022-06-10 22:03:11 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.122" ], "mac": "22:8e:15:34:87:9a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.122" ], "mac": "22:8e:15:34:87:9a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 4300d65a-dcd9-47fe-a5ae-9fbf5ebbed68 0xc004e2756f 0xc004e27580}] [] [{kube-controller-manager Update v1 2022-06-10 22:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4300d65a-dcd9-47fe-a5ae-9fbf5ebbed68\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:03:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:03:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.122\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kgrqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgrqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:03:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:03:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.122,StartTime:2022-06-10 22:03:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:03:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e1fa77ee5fbd558ef773b3570bda2f1d0286fd3ed07f0029200785c49b2a785d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:17.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9415" for this suite. 
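------------------------------
Stripped of the object dumps, the scale-subresource exercise is three calls: GET the deployment's scale, PUT it back with a new replica count, then PATCH the same subresource. The first two look like this with client-go (namespace illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "test-new-deployment"

	// GET .../namespaces/{ns}/deployments/{name}/scale
	scale, err := cs.AppsV1().Deployments(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("current replicas: %d\n", scale.Spec.Replicas)

	// PUT the scale subresource; only spec.replicas is writable through it,
	// the rest of the deployment spec cannot be touched from here.
	scale.Spec.Replicas = 2
	if _, err := cs.AppsV1().Deployments(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

The final "Patch a scale subresource" step maps onto the same typed client: Deployments(ns).Patch(ctx, name, types.MergePatchType, []byte(`{"spec":{"replicas":4}}`), metav1.PatchOptions{}, "scale"), where the trailing "scale" argument selects the subresource.
------------------------------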
• [SLOW TEST:6.077 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":23,"skipped":456,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:15.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-5a8c71aa-7958-44d5-896e-cc893320c811 STEP: Creating a pod to test consume configMaps Jun 10 22:03:15.237: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d" in namespace "configmap-6313" to be "Succeeded or Failed" Jun 10 22:03:15.240: INFO: Pod "pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184527ms Jun 10 22:03:17.245: INFO: Pod "pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00764573s Jun 10 22:03:19.253: INFO: Pod "pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015578389s STEP: Saw pod success Jun 10 22:03:19.253: INFO: Pod "pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d" satisfied condition "Succeeded or Failed" Jun 10 22:03:19.256: INFO: Trying to get logs from node node1 pod pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d container agnhost-container: STEP: delete the pod Jun 10 22:03:19.269: INFO: Waiting for pod pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d to disappear Jun 10 22:03:19.271: INFO: Pod pod-configmaps-0c880970-6e94-4c64-b652-30662841bf4d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:19.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6313" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":665,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:57.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9623 STEP: creating service affinity-clusterip in namespace services-9623 STEP: creating replication controller affinity-clusterip in namespace services-9623 I0610 22:02:57.193578 25 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-9623, replica count: 3 I0610 22:03:00.245017 25 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:03:03.248299 25 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 22:03:03.252: INFO: Creating new exec pod Jun 10 22:03:10.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9623 exec execpod-affinity4hrmq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jun 10 22:03:10.550: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jun 10 22:03:10.550: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:03:10.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9623 exec execpod-affinity4hrmq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.63.217 80' Jun 10 22:03:10.805: INFO: stderr: "+ nc -v -t -w 2 10.233.63.217 80\nConnection to 10.233.63.217 80 port [tcp/http] succeeded!\n+ echo hostName\n" Jun 10 22:03:10.805: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:03:10.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9623 exec execpod-affinity4hrmq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.63.217:80/ ; done' Jun 10 22:03:11.260: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.63.217:80/\n" Jun 10 22:03:11.260: INFO: stdout: "\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m\naffinity-clusterip-bmd9m" Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.260: INFO: Received response from host: affinity-clusterip-bmd9m Jun 10 22:03:11.261: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-9623, will wait for the garbage collector to delete the pods Jun 10 22:03:11.323: INFO: Deleting ReplicationController affinity-clusterip took: 4.10299ms Jun 10 22:03:11.424: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.409297ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:27.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9623" for this suite. 
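------------------------------
The single repeating hostname in the curl loop above is the point of the test: with sessionAffinity set to ClientIP, kube-proxy pins each client to one backend pod instead of spreading requests. The service half of the setup reduces to roughly this (selector and ports are illustrative; 9376 is the conventional serve-hostname port):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip"},
			// ClientIP affinity is what makes every request in the loop
			// above land on the same backend (affinity-clusterip-bmd9m).
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The NodePort variant of this test (the one recorded as failed in the summaries above) differs mainly in Spec.Type, and its timeout flavor additionally sets Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds.
------------------------------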
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.983 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":284,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:20.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:02:20.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 10 22:02:28.504: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-10T22:02:28Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-10T22:02:28Z]] name:name1 resourceVersion:39354 uid:795f2929-6c2c-4bb2-9cd3-d92827f68013] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 10 22:02:38.510: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-10T22:02:38Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-10T22:02:38Z]] name:name2 resourceVersion:39555 uid:380e776f-088c-4474-a143-0537cce187f3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 10 22:02:48.516: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-10T22:02:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-10T22:02:48Z]] name:name1 resourceVersion:39874 uid:795f2929-6c2c-4bb2-9cd3-d92827f68013] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 10 22:02:58.520: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2022-06-10T22:02:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-10T22:02:58Z]] name:name2 resourceVersion:40164 uid:380e776f-088c-4474-a143-0537cce187f3] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 10 22:03:08.528: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-10T22:02:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-10T22:02:48Z]] name:name1 resourceVersion:40411 uid:795f2929-6c2c-4bb2-9cd3-d92827f68013] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 10 22:03:18.534: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-10T22:02:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-10T22:02:58Z]] name:name2 resourceVersion:40582 uid:380e776f-088c-4474-a143-0537cce187f3] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:29.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5991" for this suite. 
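------------------------------
The ADDED/MODIFIED/DELETED sequence above is an ordinary watch on the custom resource; outside the e2e framework the dynamic client is the easiest way to reproduce it. A sketch, where the plural resource name "noxus" is an assumption (it comes from the CRD's spec.names.plural, not from the kind WishIHadChosenNoxu):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus", // assumed plural; the test's CRD is cluster-scoped
	}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events arrive in the same order the test logs them above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
------------------------------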
• [SLOW TEST:68.124 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:17.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:03:17.532: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 10 22:03:25.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-713 --namespace=crd-publish-openapi-713 create -f -' Jun 10 22:03:26.151: INFO: stderr: "" Jun 10 22:03:26.151: INFO: stdout: "e2e-test-crd-publish-openapi-3392-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 10 22:03:26.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-713 --namespace=crd-publish-openapi-713 delete e2e-test-crd-publish-openapi-3392-crds test-cr' Jun 10 22:03:26.322: INFO: stderr: "" Jun 10 22:03:26.322: INFO: stdout: "e2e-test-crd-publish-openapi-3392-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 10 22:03:26.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-713 --namespace=crd-publish-openapi-713 apply -f -' Jun 10 22:03:26.677: INFO: stderr: "" Jun 10 22:03:26.677: INFO: stdout: "e2e-test-crd-publish-openapi-3392-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 10 22:03:26.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-713 --namespace=crd-publish-openapi-713 delete e2e-test-crd-publish-openapi-3392-crds test-cr' Jun 10 22:03:26.855: INFO: stderr: "" Jun 10 22:03:26.855: INFO: stdout: "e2e-test-crd-publish-openapi-3392-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 10 22:03:26.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-713 explain e2e-test-crd-publish-openapi-3392-crds' Jun 10 22:03:27.229: INFO: stderr: "" Jun 10 22:03:27.229: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3392-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:31.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-713" for this suite. • [SLOW TEST:14.315 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":24,"skipped":464,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:31.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 10 22:03:31.870: INFO: Waiting up to 5m0s for pod "pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e" in namespace "emptydir-6509" to be "Succeeded or Failed" Jun 10 22:03:31.872: INFO: Pod "pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341346ms Jun 10 22:03:33.877: INFO: Pod "pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006859136s Jun 10 22:03:35.882: INFO: Pod "pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012469192s STEP: Saw pod success Jun 10 22:03:35.882: INFO: Pod "pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e" satisfied condition "Succeeded or Failed" Jun 10 22:03:35.885: INFO: Trying to get logs from node node1 pod pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e container test-container: STEP: delete the pod Jun 10 22:03:35.897: INFO: Waiting for pod pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e to disappear Jun 10 22:03:35.900: INFO: Pod pod-1a12e21c-70a4-4c19-b4ef-d34d7e24926e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:35.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6509" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":469,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":10,"skipped":133,"failed":0} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:29.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics Jun 10 22:03:39.168: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 10 22:03:39.236: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 10 22:03:39.236: INFO: Deleting pod "simpletest-rc-to-be-deleted-5gqhb" in namespace "gc-5567" Jun 10 22:03:39.244: INFO: Deleting pod "simpletest-rc-to-be-deleted-5n2zb" in namespace "gc-5567" Jun 10 22:03:39.249: INFO: Deleting pod "simpletest-rc-to-be-deleted-87mjg" in namespace "gc-5567" Jun 10 22:03:39.256: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-bmzt7" in namespace "gc-5567" Jun 10 22:03:39.264: INFO: Deleting pod "simpletest-rc-to-be-deleted-f7tjs" in namespace "gc-5567" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:39.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5567" for this suite. • [SLOW TEST:10.221 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":11,"skipped":133,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 21:57:41.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset W0610 21:57:41.963233 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 10 21:57:41.963: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 10 21:57:41.965: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-9412 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-9412 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9412 Jun 10 21:57:41.979: INFO: Found 0 stateful pods, waiting for 1 Jun 10 21:57:51.982: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 10 21:57:51.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 21:57:52.241: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 21:57:52.241: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 21:57:52.241: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 21:57:52.243: INFO: Waiting for pod ss-0 to enter Running 
- Ready=false, currently Running - Ready=true Jun 10 21:58:02.248: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 10 21:58:02.248: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 21:58:02.262: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:02.262: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:02.262: INFO: Jun 10 21:58:02.262: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 10 21:58:03.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997198883s Jun 10 21:58:04.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993827934s Jun 10 21:58:05.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989753447s Jun 10 21:58:06.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986683569s Jun 10 21:58:07.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980986705s Jun 10 21:58:08.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.975112563s Jun 10 21:58:09.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.969386603s Jun 10 21:58:10.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966321509s Jun 10 21:58:11.308: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.762979ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9412 Jun 10 21:58:12.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:58:12.570: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 10 21:58:12.570: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 21:58:12.570: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 21:58:12.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:58:13.188: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jun 10 21:58:13.188: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 21:58:13.188: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 21:58:13.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:58:13.446: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jun 10 21:58:13.446: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 10 
21:58:13.446: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 10 21:58:13.449: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 10 21:58:13.449: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 10 21:58:13.449: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 10 21:58:13.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 21:58:13.734: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 21:58:13.734: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 21:58:13.734: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 21:58:13.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 21:58:13.960: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 21:58:13.960: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 21:58:13.960: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 21:58:13.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 10 21:58:14.379: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 10 21:58:14.379: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 10 21:58:14.379: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 10 21:58:14.379: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 21:58:14.381: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 10 21:58:24.389: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 10 21:58:24.389: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 10 21:58:24.389: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 10 21:58:24.397: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:24.397: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:24.398: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:24.398: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:24.398: INFO: Jun 10 21:58:24.398: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 10 21:58:25.401: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:25.401: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:25.401: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:25.401: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:25.401: INFO: Jun 10 21:58:25.401: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 10 21:58:26.405: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:26.405: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:26.405: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:26.405: INFO: ss-2 
node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:26.405: INFO: Jun 10 21:58:26.405: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 10 21:58:27.409: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:27.409: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:27.409: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:27.409: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:27.409: INFO: Jun 10 21:58:27.409: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 10 21:58:28.413: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:28.413: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:28.413: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:28.414: INFO: Jun 10 21:58:28.414: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 10 21:58:29.418: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:29.418: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:29.418: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:02 +0000 UTC }] Jun 10 21:58:29.418: INFO: Jun 10 21:58:29.418: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 10 21:58:30.422: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:30.422: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:30.422: INFO: Jun 10 21:58:30.422: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 10 21:58:31.425: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:31.425: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:31.425: INFO: Jun 10 21:58:31.425: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 10 21:58:32.429: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:32.429: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:32.429: INFO: Jun 10 21:58:32.429: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 10 21:58:33.433: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 21:58:33.433: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:58:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 21:57:41 +0000 UTC }] Jun 10 21:58:33.433: INFO: Jun 10 21:58:33.433: INFO: StatefulSet ss 
has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9412 Jun 10 21:58:34.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:58:34.651: INFO: rc: 1 Jun 10 21:58:34.651: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jun 10 21:58:44.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:58:44.802: INFO: rc: 1 Jun 10 21:58:44.802: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:58:54.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:58:54.964: INFO: rc: 1 Jun 10 21:58:54.964: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:59:04.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:05.114: INFO: rc: 1 Jun 10 21:59:05.114: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:59:15.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:15.273: INFO: rc: 1 Jun 10 21:59:15.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:59:25.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:25.423: INFO: rc: 1 Jun 10 21:59:25.423: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:59:35.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:35.578: INFO: rc: 1 Jun 10 21:59:35.578: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:59:45.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:45.734: INFO: rc: 1 Jun 10 21:59:45.734: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 21:59:55.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 21:59:55.887: INFO: rc: 1 Jun 10 21:59:55.887: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:00:05.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:06.026: INFO: rc: 1 Jun 10 22:00:06.026: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:00:16.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:16.179: INFO: rc: 1 Jun 10 22:00:16.179: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:00:26.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:26.316: INFO: rc: 1 Jun 10 22:00:26.316: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command 
stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:00:36.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:36.480: INFO: rc: 1 Jun 10 22:00:36.481: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:00:46.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:46.630: INFO: rc: 1 Jun 10 22:00:46.630: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:00:56.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:00:56.803: INFO: rc: 1 Jun 10 22:00:56.803: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:01:06.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:06.948: INFO: rc: 1 Jun 10 22:01:06.948: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:01:16.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:17.111: INFO: rc: 1 Jun 10 22:01:17.111: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:01:27.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:27.272: INFO: rc: 1 Jun 10 22:01:27.272: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:01:37.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:37.428: INFO: rc: 1 Jun 10 22:01:37.428: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:01:47.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:47.583: INFO: rc: 1 Jun 10 22:01:47.583: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:01:57.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:01:57.740: INFO: rc: 1 Jun 10 22:01:57.740: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:02:07.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:02:07.923: INFO: rc: 1 Jun 10 22:02:07.924: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:02:17.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:02:18.081: INFO: rc: 1 Jun 10 22:02:18.081: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:02:28.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:02:28.228: INFO: rc: 1 Jun 10 22:02:28.228: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: 
exit status 1 Jun 10 22:02:38.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:02:38.385: INFO: rc: 1 Jun 10 22:02:38.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:02:48.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:02:48.503: INFO: rc: 1 Jun 10 22:02:48.503: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:02:58.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:02:58.673: INFO: rc: 1 Jun 10 22:02:58.673: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:03:08.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:03:08.839: INFO: rc: 1 Jun 10 22:03:08.839: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:03:18.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:03:18.996: INFO: rc: 1 Jun 10 22:03:18.996: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:03:28.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:03:29.151: INFO: rc: 1 Jun 10 22:03:29.151: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 10 22:03:39.153: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9412 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 10 22:03:39.312: INFO: rc: 1 Jun 10 22:03:39.312: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jun 10 22:03:39.312: INFO: Scaling statefulset ss to 0 Jun 10 22:03:39.336: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 10 22:03:39.341: INFO: Deleting all statefulset in ns statefulset-9412 Jun 10 22:03:39.345: INFO: Scaling statefulset ss to 0 Jun 10 22:03:39.358: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 22:03:39.360: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:39.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9412" for this suite. • [SLOW TEST:357.436 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:35.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Jun 10 22:03:35.969: INFO: The status of Pod pod-update-dbceda92-aa43-411b-9ff0-1c9ecd00d6e6 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:37.973: INFO: The status of Pod pod-update-dbceda92-aa43-411b-9ff0-1c9ecd00d6e6 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:39.973: INFO: The status of Pod pod-update-dbceda92-aa43-411b-9ff0-1c9ecd00d6e6 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 10 22:03:40.487: INFO: Successfully updated pod "pod-update-dbceda92-aa43-411b-9ff0-1c9ecd00d6e6" STEP: verifying the updated pod is in kubernetes Jun 10 22:03:40.492: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 
22:03:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7998" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":481,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:40.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Jun 10 22:03:40.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7487 create -f -' Jun 10 22:03:40.919: INFO: stderr: "" Jun 10 22:03:40.920: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jun 10 22:03:40.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7487 diff -f -' Jun 10 22:03:41.284: INFO: rc: 1 Jun 10 22:03:41.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7487 delete -f -' Jun 10 22:03:41.403: INFO: stderr: "" Jun 10 22:03:41.403: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:41.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7487" for this suite. 
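The "rc: 1" in the diff step above is the expected outcome: kubectl diff exits 0 when live and declared state match, 1 when differences are found, and >1 on error, so the test passes precisely because the deployment's live image differs from the manifest piped in. A minimal standalone sketch of the same check, assuming a reachable cluster; the namespace, deployment name, and image tags below are illustrative, not taken from the test:

  kubectl create namespace diff-demo
  kubectl -n diff-demo create deployment httpd-deployment --image=httpd:2.4.38-alpine
  # Pipe a manifest that declares a different image; exit code 1 signals a live/declared difference.
  cat <<'EOF' | kubectl -n diff-demo diff -f -; echo "diff exit code: $?"
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: httpd-deployment
  spec:
    selector:
      matchLabels:
        app: httpd-deployment
    template:
      metadata:
        labels:
          app: httpd-deployment
      spec:
        containers:
        - name: httpd
          image: httpd:2.4.39-alpine
  EOF
  kubectl delete namespace diff-demo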
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":27,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:39.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 10 22:03:44.362: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:44.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-279" for this suite. • [SLOW TEST:5.094 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":135,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:44.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 10 
22:03:44.468: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-268 0c3fd5b9-fe5a-4b4f-8e20-1cbd35fa5f42 41192 0 2022-06-10 22:03:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:03:44.468: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-268 0c3fd5b9-fe5a-4b4f-8e20-1cbd35fa5f42 41195 0 2022-06-10 22:03:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:44.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-268" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":13,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:39.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Jun 10 22:03:41.431: INFO: running pods: 0 < 1 Jun 10 22:03:43.434: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:45.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9098" for this suite. 
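The "Waiting for the pdb to be processed" steps above hinge on the disruption controller observing the PodDisruptionBudget and filling in its status (observedGeneration, currentHealthy, desiredHealthy, disruptionsAllowed). A minimal sketch of creating a PDB and reading the controller-populated status, assuming an illustrative namespace pdb-demo with at least one running pod labeled app=demo (all names here are assumptions):

  cat <<'EOF' | kubectl -n pdb-demo apply -f -
  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: demo-pdb
  spec:
    minAvailable: 1            # at least one selected pod must stay up during voluntary disruptions
    selector:
      matchLabels:
        app: demo
  EOF
  # These fields are written by the disruption controller once it has processed the PDB.
  kubectl -n pdb-demo get pdb demo-pdb -o jsonpath='{.status.currentHealthy} {.status.desiredHealthy} {.status.disruptionsAllowed}'; echo

Ordinary clients write only the spec; the status subresource that the test updates and patches is normally maintained by the controller alone.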
• [SLOW TEST:6.080 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:56.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jun 10 22:03:00.818: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5859 PodName:var-expansion-4e36b5a6-559e-4b18-b60f-566cbef36b66 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:03:00.818: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Jun 10 22:03:00.908: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5859 PodName:var-expansion-4e36b5a6-559e-4b18-b60f-566cbef36b66 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:03:00.908: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Jun 10 22:03:01.501: INFO: Successfully updated pod "var-expansion-4e36b5a6-559e-4b18-b60f-566cbef36b66" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jun 10 22:03:01.511: INFO: Deleting pod "var-expansion-4e36b5a6-559e-4b18-b60f-566cbef36b66" in namespace "var-expansion-5859" Jun 10 22:03:01.516: INFO: Wait up to 5m0s for pod "var-expansion-4e36b5a6-559e-4b18-b60f-566cbef36b66" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:47.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5859" for this suite. 
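The subpath write in the Variable Expansion test above works via subPathExpr, which expands $(VAR) references against the container's environment when the volume is mounted; the test drives the variable from a pod annotation through the downward API, mounts the same volume once whole and once at the expanded subpath, and checks the file is visible through both. A trimmed-down pod along those lines (the real test also updates the annotation and deletes the pod gracefully; this sketch keeps only the mount-and-write core, and the pod, volume, and annotation names are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
    annotations:
      mysubpath: mypath/foo              # consumed below via the downward API
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "touch /volume_mount/mypath/foo/test.log && test -f /subpath_mount/test.log"]
      env:
      - name: POD_SUBPATH
        valueFrom:
          fieldRef:
            fieldPath: metadata.annotations['mysubpath']
      volumeMounts:
      - name: workdir
        mountPath: /volume_mount         # the whole volume
      - name: workdir
        mountPath: /subpath_mount        # the same volume, mounted at the expanded subpath
        subPathExpr: $(POD_SUBPATH)
    volumes:
    - name: workdir
      emptyDir: {}
  EOF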
• [SLOW TEST:50.766 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":6,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:44.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Jun 10 22:03:44.564: INFO: Waiting up to 5m0s for pod "client-containers-948dbe57-4348-4e0f-a63d-11168a89f765" in namespace "containers-4558" to be "Succeeded or Failed" Jun 10 22:03:44.566: INFO: Pod "client-containers-948dbe57-4348-4e0f-a63d-11168a89f765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38242ms Jun 10 22:03:46.569: INFO: Pod "client-containers-948dbe57-4348-4e0f-a63d-11168a89f765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005327478s Jun 10 22:03:48.574: INFO: Pod "client-containers-948dbe57-4348-4e0f-a63d-11168a89f765": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009700074s STEP: Saw pod success Jun 10 22:03:48.574: INFO: Pod "client-containers-948dbe57-4348-4e0f-a63d-11168a89f765" satisfied condition "Succeeded or Failed" Jun 10 22:03:48.577: INFO: Trying to get logs from node node1 pod client-containers-948dbe57-4348-4e0f-a63d-11168a89f765 container agnhost-container: STEP: delete the pod Jun 10 22:03:48.590: INFO: Waiting for pod client-containers-948dbe57-4348-4e0f-a63d-11168a89f765 to disappear Jun 10 22:03:48.592: INFO: Pod client-containers-948dbe57-4348-4e0f-a63d-11168a89f765 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:48.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4558" for this suite. 
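The Docker Containers test above relies on the fact that setting only args in a container spec overrides the image's default arguments (docker CMD) while leaving the entrypoint intact; setting command would override the entrypoint as well. A hedged sketch of such a pod (image tag and subcommand are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Args replaces the image's CMD; the image's ENTRYPOINT still runs and
	// receives these arguments.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Args:  []string{"entrypoint-tester", "override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}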
• ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:41.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 22:03:42.073: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 22:03:44.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:03:46.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495422, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:03:49.102: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:49.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6979" for this suite. STEP: Destroying namespace "webhook-6979-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.720 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":28,"skipped":503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:45.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 10 22:03:45.556: INFO: Waiting up to 5m0s for pod "pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c" in namespace "emptydir-956" to be "Succeeded or Failed" Jun 10 22:03:45.558: INFO: Pod "pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203291ms Jun 10 22:03:47.561: INFO: Pod "pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005669354s Jun 10 22:03:49.565: INFO: Pod "pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009472762s STEP: Saw pod success Jun 10 22:03:49.565: INFO: Pod "pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c" satisfied condition "Succeeded or Failed" Jun 10 22:03:49.567: INFO: Trying to get logs from node node2 pod pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c container test-container: STEP: delete the pod Jun 10 22:03:49.593: INFO: Waiting for pod pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c to disappear Jun 10 22:03:49.595: INFO: Pod pod-fd1dc745-a6f1-4fa0-ae0d-15897114398c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:49.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-956" for this suite. 
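The emptyDir case above asks for tmpfs backing, which corresponds to medium "Memory" on the volume source. A hypothetical pod of that shape (names and the shell check are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Medium "Memory" backs the emptyDir with tmpfs; the container then
	// verifies the mount's filesystem type and its 0777 mode.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "chmod 0777 /test-volume && stat -c %a /test-volume && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}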
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":182,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:48.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Jun 10 22:03:48.637: INFO: Waiting up to 5m0s for pod "var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b" in namespace "var-expansion-5057" to be "Succeeded or Failed" Jun 10 22:03:48.639: INFO: Pod "var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660219ms Jun 10 22:03:50.643: INFO: Pod "var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006311327s Jun 10 22:03:52.646: INFO: Pod "var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009592866s STEP: Saw pod success Jun 10 22:03:52.646: INFO: Pod "var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b" satisfied condition "Succeeded or Failed" Jun 10 22:03:52.648: INFO: Trying to get logs from node node2 pod var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b container dapi-container: STEP: delete the pod Jun 10 22:03:52.661: INFO: Waiting for pod var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b to disappear Jun 10 22:03:52.663: INFO: Pod var-expansion-d174fcec-d07e-4275-97f2-2e51de0fa00b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:52.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5057" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":182,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:47.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:03:58.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4130" for this suite. • [SLOW TEST:11.108 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":7,"skipped":156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:49.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:06.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3817" for this suite. • [SLOW TEST:17.068 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":4,"skipped":43,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:58.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 22:03:59.148: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 10 22:04:01.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495439, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495439, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495439, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495439, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:04:04.170: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:04:04.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:12.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9763" for this suite. STEP: Destroying namespace "webhook-9763-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.563 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":8,"skipped":188,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:49.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-31 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 10 22:03:49.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 22:03:49.294: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:51.298: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:03:53.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:03:55.297: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:03:57.300: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:03:59.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:04:01.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:04:03.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:04:05.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:04:07.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:04:09.298: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 10 22:04:09.305: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 10 22:04:11.311: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 10 22:04:15.348: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 10 22:04:15.348: INFO: Going to poll 10.244.3.214 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jun 10 22:04:15.351: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.214:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-31 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:15.351: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:15.456: 
INFO: Found all 1 expected endpoints: [netserver-0] Jun 10 22:04:15.456: INFO: Going to poll 10.244.4.137 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jun 10 22:04:15.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.137:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-31 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:15.458: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:15.540: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:15.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-31" for this suite. • [SLOW TEST:26.307 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":543,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:12.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-3df4c631-9110-4379-bc68-eadd2e7b5560 STEP: Creating a pod to test consume secrets Jun 10 22:04:12.341: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d" in namespace "projected-3899" to be "Succeeded or Failed" Jun 10 22:04:12.344: INFO: Pod "pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617612ms Jun 10 22:04:14.348: INFO: Pod "pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006684487s Jun 10 22:04:16.361: INFO: Pod "pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019856623s STEP: Saw pod success Jun 10 22:04:16.361: INFO: Pod "pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d" satisfied condition "Succeeded or Failed" Jun 10 22:04:16.363: INFO: Trying to get logs from node node1 pod pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d container projected-secret-volume-test: STEP: delete the pod Jun 10 22:04:16.378: INFO: Waiting for pod pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d to disappear Jun 10 22:04:16.380: INFO: Pod pod-projected-secrets-17ef3a78-7a96-4082-8de6-3f87120b901d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:16.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3899" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":189,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:16.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 10 22:04:16.443: INFO: starting watch STEP: patching STEP: updating Jun 10 22:04:16.453: INFO: waiting for watch events with expected annotations Jun 10 22:04:16.453: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:16.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4988" for this suite.
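The IngressClass conformance test above drives create/get/list/watch/patch/update/delete against the cluster-scoped networking.k8s.io/v1 endpoint. A minimal client-go sketch of the create and delete steps (the controller string and kubeconfig path are assumptions):

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// IngressClasses are cluster-scoped; Controller names the ingress
	// controller implementation that should honor Ingresses of this class.
	ic, err := clientset.NetworkingV1().IngressClasses().Create(context.TODO(),
		&networkingv1.IngressClass{
			ObjectMeta: metav1.ObjectMeta{GenerateName: "ingressclass-demo-"},
			Spec:       networkingv1.IngressClassSpec{Controller: "example.com/controller"},
		}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", ic.Name)

	// The conformance test also exercises get/list/watch/patch/update and
	// deleteCollection against this same endpoint.
	if err := clientset.NetworkingV1().IngressClasses().Delete(context.TODO(), ic.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}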
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":10,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:52.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 10 22:03:52.699: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:00.817: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:19.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5303" for this suite. • [SLOW TEST:26.347 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":16,"skipped":184,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:19.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 10 22:03:19.322: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 40596 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:03:19.323: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a 
watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 40596 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 10 22:03:29.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 40788 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:03:29.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 40788 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 10 22:03:39.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 41029 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:03:39.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 41029 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 10 22:03:49.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 41378 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:03:49.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4847 a31786c1-c34b-4a3f-b849-e80d1949f741 41378 0 2022-06-10 22:03:19 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring 
the correct watchers observe the notification Jun 10 22:03:59.360: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4847 ede9e786-bcb9-4c70-9da6-f6cbe3c205cc 41604 0 2022-06-10 22:03:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:03:59.360: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4847 ede9e786-bcb9-4c70-9da6-f6cbe3c205cc 41604 0 2022-06-10 22:03:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 10 22:04:09.366: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4847 ede9e786-bcb9-4c70-9da6-f6cbe3c205cc 41683 0 2022-06-10 22:03:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 10 22:04:09.367: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4847 ede9e786-bcb9-4c70-9da6-f6cbe3c205cc 41683 0 2022-06-10 22:03:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-10 22:03:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:19.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4847" for this suite. 
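Each watcher in the test above is a label-filtered watch on ConfigMaps, receiving ADDED/MODIFIED/DELETED events of the kind logged. A minimal client-go sketch of one such watcher (the namespace and label value are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Watch only ConfigMaps carrying label A, mirroring the per-label
	// watchers the test sets up; each event carries the full object, which
	// is what the "Got : ADDED &ConfigMap{...}" lines above print.
	w, err := clientset.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}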
• [SLOW TEST:60.087 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":32,"skipped":670,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:15.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-81f6be94-ec88-4323-9e9c-9c9017c12d01 STEP: Creating a pod to test consume configMaps Jun 10 22:04:15.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9" in namespace "configmap-918" to be "Succeeded or Failed" Jun 10 22:04:15.615: INFO: Pod "pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.162395ms Jun 10 22:04:17.619: INFO: Pod "pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00863686s Jun 10 22:04:19.622: INFO: Pod "pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01241052s STEP: Saw pod success Jun 10 22:04:19.622: INFO: Pod "pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9" satisfied condition "Succeeded or Failed" Jun 10 22:04:19.624: INFO: Trying to get logs from node node1 pod pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9 container agnhost-container: STEP: delete the pod Jun 10 22:04:19.637: INFO: Waiting for pod pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9 to disappear Jun 10 22:04:19.639: INFO: Pod pod-configmaps-e4ec90a3-4bde-441c-a3e0-f60e1a08ece9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:19.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-918" for this suite. 
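The mapping-plus-mode case above projects a single ConfigMap key to a custom path and gives that item its own file mode. A hypothetical pod spec of that shape (names and the agnhost mounttest flags are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Items remaps key data-2 to path/to/data-2 inside the mount, and the
	// per-item Mode (0400 here) takes precedence over the volume default.
	mode := int32(0400)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Args: []string{"mounttest",
					"--file_content=/etc/configmap-volume/path/to/data-2",
					"--file_mode=/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}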
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":547,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:19.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:19.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9777" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":31,"skipped":564,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:06.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-9276 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9276 STEP: Deleting pre-stop pod Jun 10 22:04:19.780: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:19.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9276" for this suite. 
• [SLOW TEST:13.092 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":5,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:00:19.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-caf0ee89-130d-4ec7-8b5b-93b4bb6df421 in namespace container-probe-3260 Jun 10 22:00:23.822: INFO: Started pod liveness-caf0ee89-130d-4ec7-8b5b-93b4bb6df421 in namespace container-probe-3260 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 22:00:23.825: INFO: Initial restart count of pod liveness-caf0ee89-130d-4ec7-8b5b-93b4bb6df421 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:24.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3260" for this suite. 
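A tcp:8080 liveness probe passes as long as something accepts connections on that port, so the pod above is observed for roughly four minutes without a restart. A sketch of such a probe (image and timings are assumptions; in the v1.21-era API, Probe embeds corev1.Handler, later renamed ProbeHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The probe only opens a TCP connection to port 8080; any listener
	// keeps the container alive, so restartCount stays at 0.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}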
• [SLOW TEST:244.742 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:19.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-8c11ef26-c7bd-40b9-a084-8f0fdc5e6fad STEP: Creating a pod to test consume secrets Jun 10 22:04:19.443: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639" in namespace "projected-621" to be "Succeeded or Failed" Jun 10 22:04:19.446: INFO: Pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812446ms Jun 10 22:04:21.449: INFO: Pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006126582s Jun 10 22:04:23.454: INFO: Pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010684454s Jun 10 22:04:25.458: INFO: Pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015196344s Jun 10 22:04:27.462: INFO: Pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01880849s STEP: Saw pod success Jun 10 22:04:27.462: INFO: Pod "pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639" satisfied condition "Succeeded or Failed" Jun 10 22:04:27.464: INFO: Trying to get logs from node node2 pod pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639 container projected-secret-volume-test: STEP: delete the pod Jun 10 22:04:27.478: INFO: Waiting for pod pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639 to disappear Jun 10 22:04:27.481: INFO: Pod pod-projected-secrets-8991d8cf-23fa-4d40-9e9a-a8986cf5d639 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:27.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-621" for this suite. 
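Projected volumes merge several sources (secrets, configMaps, downwardAPI, serviceAccountToken) under a single mount point; the test above consumes one secret source. A hypothetical pod doing the same (names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One secret projected into the volume; its keys appear as files under
	// the mount path, which the test container reads and exits.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}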
• [SLOW TEST:8.082 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:16.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:04:16.544: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 10 22:04:25.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3857 --namespace=crd-publish-openapi-3857 create -f -' Jun 10 22:04:25.786: INFO: stderr: "" Jun 10 22:04:25.786: INFO: stdout: "e2e-test-crd-publish-openapi-5102-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 10 22:04:25.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3857 --namespace=crd-publish-openapi-3857 delete e2e-test-crd-publish-openapi-5102-crds test-cr' Jun 10 22:04:25.957: INFO: stderr: "" Jun 10 22:04:25.957: INFO: stdout: "e2e-test-crd-publish-openapi-5102-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 10 22:04:25.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3857 --namespace=crd-publish-openapi-3857 apply -f -' Jun 10 22:04:26.311: INFO: stderr: "" Jun 10 22:04:26.311: INFO: stdout: "e2e-test-crd-publish-openapi-5102-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 10 22:04:26.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3857 --namespace=crd-publish-openapi-3857 delete e2e-test-crd-publish-openapi-5102-crds test-cr' Jun 10 22:04:26.487: INFO: stderr: "" Jun 10 22:04:26.487: INFO: stdout: "e2e-test-crd-publish-openapi-5102-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 10 22:04:26.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3857 explain e2e-test-crd-publish-openapi-5102-crds' Jun 10 22:04:26.849: INFO: stderr: "" Jun 10 22:04:26.849: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5102-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:30.546: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3857" for this suite. • [SLOW TEST:14.044 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:19.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 10 22:04:19.863: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:30.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8635" for this suite. 
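Init containers run sequentially to completion before any app container starts; on a RestartAlways pod the app container is then kept running, which is what the test above asserts through the pod's status conditions. A minimal hypothetical pod with two init containers (images and commands are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// init1 must exit 0 before init2 starts; only after both succeed does
	// the kubelet start (and keep restarting) run1.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}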
• [SLOW TEST:10.996 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":6,"skipped":75,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:19.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:04:19.780: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f" in namespace "security-context-test-2712" to be "Succeeded or Failed" Jun 10 22:04:19.783: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200145ms Jun 10 22:04:21.785: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004939997s Jun 10 22:04:23.790: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009299912s Jun 10 22:04:25.792: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011826358s Jun 10 22:04:27.796: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015402946s Jun 10 22:04:29.799: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018956455s Jun 10 22:04:31.804: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.023034329s Jun 10 22:04:33.808: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.027895518s Jun 10 22:04:33.808: INFO: Pod "busybox-readonly-false-53e7e369-14f3-4275-ba6a-8a0df0bfc27f" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:33.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2712" for this suite. 
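With readOnlyRootFilesystem explicitly false, writes to the container's root filesystem must succeed, which is what the write-then-exit container in the test above checks. A hedged sketch of such a pod (name, image, and the write check are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ReadOnlyRootFilesystem is a *bool; setting it to false (rather than
	// leaving it nil) makes the writable-rootfs expectation explicit.
	readOnly := false
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox-readonly-false",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "echo writable > /tmp/write-check"},
				SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}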
• [SLOW TEST:14.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":573,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:30.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Jun 10 22:04:30.609: INFO: Waiting up to 5m0s for pod "var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b" in namespace "var-expansion-3177" to be "Succeeded or Failed" Jun 10 22:04:30.612: INFO: Pod "var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250235ms Jun 10 22:04:32.615: INFO: Pod "var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005846426s Jun 10 22:04:34.620: INFO: Pod "var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010110047s STEP: Saw pod success Jun 10 22:04:34.620: INFO: Pod "var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b" satisfied condition "Succeeded or Failed" Jun 10 22:04:34.622: INFO: Trying to get logs from node node2 pod var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b container dapi-container: STEP: delete the pod Jun 10 22:04:34.633: INFO: Waiting for pod var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b to disappear Jun 10 22:04:34.635: INFO: Pod var-expansion-42de69bb-93a1-4bd7-8db9-ccbe3d13060b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:34.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3177" for this suite. 
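The var-expansion test relies on the kubelet substituting $(VAR) references in a container's command and args from the container's own environment, with no shell involved. A minimal sketch of that mechanism; the variable name and value are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29",
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
				// $(MESSAGE) is expanded by the kubelet before exec; no shell runs.
				Command: []string{"echo"},
				Args:    []string{"$(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}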
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":216,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:34.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:34.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6130" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:30.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:37.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9357" for this suite. • [SLOW TEST:7.048 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":7,"skipped":84,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:33.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-921b78e6-dc1d-43cc-90ef-1cc1b54ba915 STEP: Creating a pod to test consume configMaps Jun 10 22:04:33.870: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32" in namespace "projected-2154" to be "Succeeded or Failed" Jun 10 22:04:33.873: INFO: Pod "pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334359ms Jun 10 22:04:35.877: INFO: Pod "pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006420977s Jun 10 22:04:37.881: INFO: Pod "pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010852052s STEP: Saw pod success Jun 10 22:04:37.881: INFO: Pod "pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32" satisfied condition "Succeeded or Failed" Jun 10 22:04:37.885: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32 container agnhost-container: STEP: delete the pod Jun 10 22:04:37.899: INFO: Waiting for pod pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32 to disappear Jun 10 22:04:37.901: INFO: Pod pod-projected-configmaps-a210a139-cec6-40a2-b2d1-a8efbacfdb32 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:37.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2154" for this suite. 
•S ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:24.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Jun 10 22:04:24.639: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:26.643: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:28.643: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:30.643: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:32.642: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Jun 10 22:04:32.659: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:34.663: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:36.664: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 10 22:04:36.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:36.667: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:36.839: INFO: Exec stderr: "" Jun 10 22:04:36.839: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:36.839: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:36.927: INFO: Exec stderr: "" Jun 10 22:04:36.927: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:36.927: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:37.051: INFO: Exec stderr: "" Jun 10 22:04:37.051: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:37.051: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:37.240: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 10 22:04:37.240: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:37.240: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:37.504: INFO: Exec stderr: "" Jun 10 22:04:37.504: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:37.504: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:37.608: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 10 22:04:37.608: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:37.608: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:38.013: INFO: Exec stderr: "" Jun 10 22:04:38.013: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:38.013: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:38.152: INFO: Exec stderr: "" Jun 10 22:04:38.152: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:38.152: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:38.641: INFO: Exec stderr: "" Jun 10 22:04:38.641: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6121 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:04:38.641: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:04:38.730: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:38.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6121" for this suite. 
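The KubeletManagedEtcHosts exec transcript above covers three cases: with hostNetwork=false the kubelet injects its managed /etc/hosts; a container that mounts its own volume over /etc/hosts (busybox-3) opts out; and with hostNetwork=true the node's own file is left untouched. A sketch of the hostNetwork=true variant; the pod name and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-network-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			// Shares the node's network namespace, so the kubelet leaves
			// /etc/hosts alone instead of injecting its managed copy.
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}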
• [SLOW TEST:14.135 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:34.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Jun 10 22:04:34.800: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:36.804: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:38.804: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:40.803: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:41.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1766" for this suite. 
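The adoption test above works because the pre-created pod carries a matching name=pod-adoption label but no controller ownerReference, so the ReplicationController takes ownership of it rather than spawning a fresh replica. A sketch of the matching RC; the replica count and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Namespace: "default"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			// Matches the label on the pre-existing bare pod; since that pod
			// has no controller ownerReference, the RC adopts it.
			Selector: map[string]string{"name": "pod-adoption"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.4.1"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}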
• [SLOW TEST:7.061 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":14,"skipped":236,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:38.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 10 22:04:38.818: INFO: Waiting up to 5m0s for pod "downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee" in namespace "downward-api-3201" to be "Succeeded or Failed" Jun 10 22:04:38.820: INFO: Pod "downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260432ms Jun 10 22:04:40.823: INFO: Pod "downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005307281s Jun 10 22:04:42.828: INFO: Pod "downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009859257s STEP: Saw pod success Jun 10 22:04:42.828: INFO: Pod "downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee" satisfied condition "Succeeded or Failed" Jun 10 22:04:42.830: INFO: Trying to get logs from node node2 pod downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee container dapi-container: STEP: delete the pod Jun 10 22:04:42.861: INFO: Waiting for pod downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee to disappear Jun 10 22:04:42.863: INFO: Pod downward-api-bf56c82b-7b4f-4ab0-838e-21ffa6d946ee no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:42.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3201" for this suite. 
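The Downward API case above injects limits.cpu and limits.memory as env vars into a container that declares no limits, in which case the API falls back to the node's allocatable capacity, which is exactly what the test verifies. A sketch of that wiring; the env var names and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				// No resources.limits are declared, so these resolve to the
				// node's allocatable CPU and memory.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}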
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":330,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:27.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:45.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2366" for this suite. • [SLOW TEST:18.044 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":34,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:37.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 10 22:04:41.963: INFO: &Pod{ObjectMeta:{send-events-4763f0d9-6f97-4f1c-b981-dfd921eb14ae events-7031 cbd35820-4dc3-47c6-9287-82414473c835 42541 0 2022-06-10 22:04:37 +0000 UTC map[name:foo time:939876365] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.152" ], "mac": "56:a3:99:a3:0e:e6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.152" ], "mac": "56:a3:99:a3:0e:e6", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-06-10 22:04:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:04:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:04:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.152\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4cgt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cgt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,Host
PID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:04:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:04:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.152,StartTime:2022-06-10 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:04:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://a94807108f91ce9a823155ace008b19c747cc60e27aae4f012ce1dfc79dcdcbb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 10 22:04:43.969: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 10 22:04:45.972: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:45.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7031" for this suite. 
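After the pod dump above, the test polls the Events API until it sees both a scheduler event and a kubelet event about the pod. A sketch of the equivalent client-go query; the kubeconfig path, namespace, and pod name are illustrative, and client-go at v0.21.x is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Field selectors narrow the listing to events about one pod, as the test does.
	evs, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=send-events-demo",
	})
	if err != nil {
		panic(err)
	}
	for _, ev := range evs.Items {
		// Source.Component is "default-scheduler" for scheduling events and
		// "kubelet" for image pulls and container starts.
		fmt.Printf("%s\t%s\t%s\n", ev.Source.Component, ev.Reason, ev.Message)
	}
}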
• [SLOW TEST:8.069 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:37.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-db4f1bb8-757f-4581-976c-cad2ceb08a95 STEP: Creating configMap with name cm-test-opt-upd-36788697-8044-4b39-b6b5-89de39da5f79 STEP: Creating the pod Jun 10 22:04:38.001: INFO: The status of Pod pod-projected-configmaps-55524286-3b49-4b4b-bdd8-da5e34ee9814 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:40.006: INFO: The status of Pod pod-projected-configmaps-55524286-3b49-4b4b-bdd8-da5e34ee9814 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:04:42.004: INFO: The status of Pod pod-projected-configmaps-55524286-3b49-4b4b-bdd8-da5e34ee9814 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-db4f1bb8-757f-4581-976c-cad2ceb08a95 STEP: Updating configmap cm-test-opt-upd-36788697-8044-4b39-b6b5-89de39da5f79 STEP: Creating configMap with name cm-test-opt-create-c93927d8-f956-4519-ad33-619c308196ef STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:46.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2357" for this suite. 
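The optional-updates case above deletes, updates, and creates ConfigMaps while the pod is running; because the kubelet periodically resyncs projected volume contents, the changes surface in the mounted files without a pod restart. A sketch of one optional source; the ConfigMap and volume names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "cm-opt",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt"},
						// Optional: a missing ConfigMap projects an empty directory
						// instead of failing the mount, so it can be created later.
						Optional: &optional,
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}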
• [SLOW TEST:8.124 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:42.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 10 22:04:42.928: INFO: Waiting up to 5m0s for pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7" in namespace "emptydir-4861" to be "Succeeded or Failed" Jun 10 22:04:42.930: INFO: Pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.547628ms Jun 10 22:04:44.934: INFO: Pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006817389s Jun 10 22:04:46.937: INFO: Pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009714141s Jun 10 22:04:48.942: INFO: Pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014265185s Jun 10 22:04:50.947: INFO: Pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019352322s STEP: Saw pod success Jun 10 22:04:50.947: INFO: Pod "pod-63006263-4407-4793-9bcc-6c06a7f063d7" satisfied condition "Succeeded or Failed" Jun 10 22:04:50.951: INFO: Trying to get logs from node node2 pod pod-63006263-4407-4793-9bcc-6c06a7f063d7 container test-container: STEP: delete the pod Jun 10 22:04:50.964: INFO: Waiting for pod pod-63006263-4407-4793-9bcc-6c06a7f063d7 to disappear Jun 10 22:04:50.966: INFO: Pod pod-63006263-4407-4793-9bcc-6c06a7f063d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:50.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4861" for this suite. 
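The EmptyDir case above writes a 0644 file into a default-medium (node-disk-backed) emptyDir as a non-root user and verifies the resulting mode and content. A minimal sketch; the UID, paths, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "ed",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium: node disk
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox:1.29",
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid}, // non-root writer
				Command:         []string{"sh", "-c", "echo hi > /mnt/ed/f && chmod 0644 /mnt/ed/f && stat -c '%a' /mnt/ed/f"},
				VolumeMounts:    []corev1.VolumeMount{{Name: "ed", MountPath: "/mnt/ed"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}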
• [SLOW TEST:8.084 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":339,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:46.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-83252654-cc33-4b2c-abc9-a659c2edae03 STEP: Creating secret with name secret-projected-all-test-volume-89a7c270-d8ea-4f6f-95aa-5464081cb98a STEP: Creating a pod to test Check all projections for projected volume plugin Jun 10 22:04:46.108: INFO: Waiting up to 5m0s for pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53" in namespace "projected-2653" to be "Succeeded or Failed" Jun 10 22:04:46.110: INFO: Pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237468ms Jun 10 22:04:48.114: INFO: Pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005747504s Jun 10 22:04:50.117: INFO: Pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009084544s Jun 10 22:04:52.120: INFO: Pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011651107s Jun 10 22:04:54.124: INFO: Pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015727055s STEP: Saw pod success Jun 10 22:04:54.124: INFO: Pod "projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53" satisfied condition "Succeeded or Failed" Jun 10 22:04:54.126: INFO: Trying to get logs from node node2 pod projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53 container projected-all-volume-test: STEP: delete the pod Jun 10 22:04:54.258: INFO: Waiting for pod projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53 to disappear Jun 10 22:04:54.260: INFO: Pod projected-volume-951b03d6-cb8c-42d0-b0fd-757d57f4de53 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:54.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2653" for this suite. 
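The projected-combined case bundles a ConfigMap, a Secret, and downward-API items into a single projected volume, which is the point of the projection API over three separate mounts. A sketch of the three sources side by side; the resource names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "all-in-one",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// All three sources land in one mount directory; separate
				// volumes would each need their own mount path.
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all"}}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all"}}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}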
• [SLOW TEST:8.198 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":143,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:45.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Jun 10 22:04:45.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 create -f -' Jun 10 22:04:46.044: INFO: stderr: "" Jun 10 22:04:46.044: INFO: stdout: "pod/pause created\n" Jun 10 22:04:46.044: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 10 22:04:46.045: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7103" to be "running and ready" Jun 10 22:04:46.047: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464644ms Jun 10 22:04:48.058: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012966223s Jun 10 22:04:50.062: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017047843s Jun 10 22:04:52.065: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020126138s Jun 10 22:04:54.068: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023645419s Jun 10 22:04:56.071: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.026775819s Jun 10 22:04:56.071: INFO: Pod "pause" satisfied condition "running and ready" Jun 10 22:04:56.071: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Jun 10 22:04:56.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 label pods pause testing-label=testing-label-value' Jun 10 22:04:56.243: INFO: stderr: "" Jun 10 22:04:56.243: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 10 22:04:56.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 get pod pause -L testing-label' Jun 10 22:04:56.406: INFO: stderr: "" Jun 10 22:04:56.406: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 10 22:04:56.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 label pods pause testing-label-' Jun 10 22:04:56.569: INFO: stderr: "" Jun 10 22:04:56.569: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 10 22:04:56.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 get pod pause -L testing-label' Jun 10 22:04:56.766: INFO: stderr: "" Jun 10 22:04:56.766: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Jun 10 22:04:56.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 delete --grace-period=0 --force -f -' Jun 10 22:04:56.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:04:56.923: INFO: stdout: "pod \"pause\" force deleted\n" Jun 10 22:04:56.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 get rc,svc -l name=pause --no-headers' Jun 10 22:04:57.143: INFO: stderr: "No resources found in kubectl-7103 namespace.\n" Jun 10 22:04:57.143: INFO: stdout: "" Jun 10 22:04:57.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7103 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 10 22:04:57.320: INFO: stderr: "" Jun 10 22:04:57.320: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:57.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7103" for this suite. 
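The label add/remove above can also be done straight through the API as a strategic-merge patch, where setting a key to null deletes it. A sketch assuming client-go at v0.21.x; the kubeconfig path, namespace, and pod name are illustrative.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("default")
	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Equivalent of: kubectl label pods pause testing-label-
	// (a null value removes the key in a merge patch)
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(ctx, "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}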
• [SLOW TEST:11.716 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":35,"skipped":724,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:19.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Jun 10 22:04:59.115: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 10 22:04:59.177: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 10 22:04:59.177: INFO: Deleting pod "simpletest.rc-9lfmg" in namespace "gc-3518" Jun 10 22:04:59.185: INFO: Deleting pod "simpletest.rc-glgm4" in namespace "gc-3518" Jun 10 22:04:59.192: INFO: Deleting pod "simpletest.rc-jpx5m" in namespace "gc-3518" Jun 10 22:04:59.197: INFO: Deleting pod "simpletest.rc-kvdrb" in namespace "gc-3518" Jun 10 22:04:59.203: INFO: Deleting pod "simpletest.rc-p4f94" in namespace "gc-3518" Jun 10 22:04:59.209: INFO: Deleting pod "simpletest.rc-qdqdj" in namespace "gc-3518" Jun 10 22:04:59.215: INFO: Deleting pod "simpletest.rc-r494d" in namespace "gc-3518" Jun 10 22:04:59.221: INFO: Deleting pod "simpletest.rc-shkdc" in namespace "gc-3518" Jun 10 22:04:59.227: INFO: Deleting pod "simpletest.rc-snx7n" in namespace "gc-3518" Jun 10 22:04:59.233: INFO: Deleting pod "simpletest.rc-znmv9" in namespace "gc-3518" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:04:59.240: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "gc-3518" for this suite. • [SLOW TEST:40.201 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":17,"skipped":194,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":601,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:46.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Jun 10 22:04:46.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 create -f -' Jun 10 22:04:46.460: INFO: stderr: "" Jun 10 22:04:46.460: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 10 22:04:46.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 10 22:04:46.644: INFO: stderr: "" Jun 10 22:04:46.644: INFO: stdout: "update-demo-nautilus-946rn update-demo-nautilus-glnfp " Jun 10 22:04:46.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-946rn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 10 22:04:46.824: INFO: stderr: "" Jun 10 22:04:46.825: INFO: stdout: "" Jun 10 22:04:46.825: INFO: update-demo-nautilus-946rn is created but not running Jun 10 22:04:51.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 10 22:04:52.019: INFO: stderr: "" Jun 10 22:04:52.019: INFO: stdout: "update-demo-nautilus-946rn update-demo-nautilus-glnfp " Jun 10 22:04:52.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-946rn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 10 22:04:52.195: INFO: stderr: "" Jun 10 22:04:52.195: INFO: stdout: "" Jun 10 22:04:52.195: INFO: update-demo-nautilus-946rn is created but not running Jun 10 22:04:57.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 10 22:04:57.372: INFO: stderr: "" Jun 10 22:04:57.372: INFO: stdout: "update-demo-nautilus-946rn update-demo-nautilus-glnfp " Jun 10 22:04:57.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-946rn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 10 22:04:57.552: INFO: stderr: "" Jun 10 22:04:57.552: INFO: stdout: "" Jun 10 22:04:57.553: INFO: update-demo-nautilus-946rn is created but not running Jun 10 22:05:02.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 10 22:05:02.737: INFO: stderr: "" Jun 10 22:05:02.737: INFO: stdout: "update-demo-nautilus-946rn update-demo-nautilus-glnfp " Jun 10 22:05:02.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-946rn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 10 22:05:02.920: INFO: stderr: "" Jun 10 22:05:02.920: INFO: stdout: "true" Jun 10 22:05:02.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-946rn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 10 22:05:03.085: INFO: stderr: "" Jun 10 22:05:03.085: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 10 22:05:03.085: INFO: validating pod update-demo-nautilus-946rn Jun 10 22:05:03.088: INFO: got data: { "image": "nautilus.jpg" } Jun 10 22:05:03.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 10 22:05:03.088: INFO: update-demo-nautilus-946rn is verified up and running Jun 10 22:05:03.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-glnfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 10 22:05:03.254: INFO: stderr: "" Jun 10 22:05:03.254: INFO: stdout: "true" Jun 10 22:05:03.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods update-demo-nautilus-glnfp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 10 22:05:03.420: INFO: stderr: "" Jun 10 22:05:03.420: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 10 22:05:03.420: INFO: validating pod update-demo-nautilus-glnfp Jun 10 22:05:03.424: INFO: got data: { "image": "nautilus.jpg" } Jun 10 22:05:03.424: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 10 22:05:03.424: INFO: update-demo-nautilus-glnfp is verified up and running STEP: using delete to clean up resources Jun 10 22:05:03.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 delete --grace-period=0 --force -f -' Jun 10 22:05:03.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 10 22:05:03.560: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 10 22:05:03.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get rc,svc -l name=update-demo --no-headers' Jun 10 22:05:03.762: INFO: stderr: "No resources found in kubectl-8179 namespace.\n" Jun 10 22:05:03.762: INFO: stdout: "" Jun 10 22:05:03.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8179 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 10 22:05:03.955: INFO: stderr: "" Jun 10 22:05:03.955: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:03.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8179" for this suite. 
• [SLOW TEST:17.884 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:02:38.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 in namespace container-probe-7648 Jun 10 22:02:42.116: INFO: Started pod liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 in namespace container-probe-7648 STEP: checking the pod's current state and verifying that restartCount is present Jun 10 22:02:42.118: INFO: Initial restart count of pod liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 is 0 Jun 10 22:03:02.203: INFO: Restart count of pod container-probe-7648/liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 is now 1 (20.085707801s elapsed) Jun 10 22:03:20.280: INFO: Restart count of pod container-probe-7648/liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 is now 2 (38.162103763s elapsed) Jun 10 22:03:42.348: INFO: Restart count of pod container-probe-7648/liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 is now 3 (1m0.230345476s elapsed) Jun 10 22:04:00.415: INFO: Restart count of pod container-probe-7648/liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 is now 4 (1m18.297233306s elapsed) Jun 10 22:05:04.655: INFO: Restart count of pod container-probe-7648/liveness-fc2312d9-40cc-4efb-a4a5-1d3dcd6d1d64 is now 5 (2m22.537768005s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:04.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7648" for this suite. 
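The restart counts above climb monotonically because a failing liveness probe makes the kubelet kill and restart the container, and the widening gaps between restarts (20s, 38s, 1m0s, 1m18s, 2m22s elapsed) reflect the kubelet's exponential restart back-off. A sketch of a pod built to fail its probe after a healthy window; the commands and timings are illustrative, and note that the v1.21 API used here names the probe's action field Handler (newer releases renamed it ProbeHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox:1.29",
				// Healthy for ~30s, then the probe file disappears and the probe fails.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1, // one failed probe triggers a restart
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}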
• [SLOW TEST:146.595 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":634,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:04:41.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-htr9
STEP: Creating a pod to test atomic-volume-subpath
Jun 10 22:04:41.873: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-htr9" in namespace "subpath-8389" to be "Succeeded or Failed"
Jun 10 22:04:41.876: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621668ms
Jun 10 22:04:43.880: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006205525s
Jun 10 22:04:45.886: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012079749s
Jun 10 22:04:47.890: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016218271s
Jun 10 22:04:49.895: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 8.021128883s
Jun 10 22:04:51.899: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 10.025733257s
Jun 10 22:04:53.903: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 12.029990299s
Jun 10 22:04:55.908: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 14.034195847s
Jun 10 22:04:57.913: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 16.039179841s
Jun 10 22:04:59.917: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 18.043655289s
Jun 10 22:05:01.921: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 20.047664705s
Jun 10 22:05:03.926: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 22.05269267s
Jun 10 22:05:05.932: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 24.058827304s
Jun 10 22:05:07.936: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Running", Reason="", readiness=true. Elapsed: 26.062715832s
Jun 10 22:05:09.942: INFO: Pod "pod-subpath-test-projected-htr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.068613132s
STEP: Saw pod success
Jun 10 22:05:09.942: INFO: Pod "pod-subpath-test-projected-htr9" satisfied condition "Succeeded or Failed"
Jun 10 22:05:09.944: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-htr9 container test-container-subpath-projected-htr9:
STEP: delete the pod
Jun 10 22:05:09.971: INFO: Waiting for pod pod-subpath-test-projected-htr9 to disappear
Jun 10 22:05:09.975: INFO: Pod pod-subpath-test-projected-htr9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-htr9
Jun 10 22:05:09.975: INFO: Deleting pod "pod-subpath-test-projected-htr9" in namespace "subpath-8389"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:09.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8389" for this suite.
• [SLOW TEST:28.157 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":237,"failed":0}
SSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":35,"skipped":601,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:03.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-77db5b89-4767-4c41-b97e-45654015ee57
STEP: Creating a pod to test consume configMaps
Jun 10 22:05:04.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622" in namespace "configmap-8213" to be "Succeeded or Failed"
Jun 10 22:05:04.009: INFO: Pod "pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258042ms
Jun 10 22:05:06.012: INFO: Pod "pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005300003s
Jun 10 22:05:08.015: INFO: Pod "pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008398853s
Jun 10 22:05:10.019: INFO: Pod "pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012485593s
STEP: Saw pod success
Jun 10 22:05:10.019: INFO: Pod "pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622" satisfied condition "Succeeded or Failed"
Jun 10 22:05:10.021: INFO: Trying to get logs from node node2 pod pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622 container configmap-volume-test:
STEP: delete the pod
Jun 10 22:05:10.034: INFO: Waiting for pod pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622 to disappear
Jun 10 22:05:10.037: INFO: Pod pod-configmaps-d8d250b9-ff38-49d7-b46c-c133ec0b8622 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:10.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8213" for this suite.
• [SLOW TEST:6.077 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":601,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:04:59.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:10.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4759" for this suite.
• [SLOW TEST:11.062 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":18,"skipped":202,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:04:57.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 10 22:04:58.216: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 10 22:05:00.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 22:05:02.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 22:05:04.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495498, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 10 22:05:07.237: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jun 10 22:05:11.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-7315 attach --namespace=webhook-7315 to-be-attached-pod -i -c=container1'
Jun 10 22:05:11.434: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:11.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7315" for this suite.
STEP: Destroying namespace "webhook-7315-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.126 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:10.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 10 22:05:10.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b" in namespace "projected-4132" to be "Succeeded or Failed"
Jun 10 22:05:10.115: INFO: Pod "downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173799ms
Jun 10 22:05:12.119: INFO: Pod "downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00560672s
Jun 10 22:05:14.123: INFO: Pod "downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010249339s
Jun 10 22:05:16.127: INFO: Pod "downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014015349s
STEP: Saw pod success
Jun 10 22:05:16.127: INFO: Pod "downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b" satisfied condition "Succeeded or Failed"
Jun 10 22:05:16.130: INFO: Trying to get logs from node node1 pod downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b container client-container:
STEP: delete the pod
Jun 10 22:05:16.143: INFO: Waiting for pod downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b to disappear
Jun 10 22:05:16.145: INFO: Pod downwardapi-volume-f632d616-886a-480b-85eb-562dc8f1723b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:16.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4132" for this suite.
• [SLOW TEST:6.074 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":619,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:10.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 10 22:05:10.386: INFO: Waiting up to 5m0s for pod "pod-ecc1e301-13d1-497e-8851-b0951ab712f7" in namespace "emptydir-4592" to be "Succeeded or Failed"
Jun 10 22:05:10.388: INFO: Pod "pod-ecc1e301-13d1-497e-8851-b0951ab712f7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.898811ms
Jun 10 22:05:12.390: INFO: Pod "pod-ecc1e301-13d1-497e-8851-b0951ab712f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004496699s
Jun 10 22:05:14.395: INFO: Pod "pod-ecc1e301-13d1-497e-8851-b0951ab712f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009472668s
Jun 10 22:05:16.399: INFO: Pod "pod-ecc1e301-13d1-497e-8851-b0951ab712f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013771625s
STEP: Saw pod success
Jun 10 22:05:16.399: INFO: Pod "pod-ecc1e301-13d1-497e-8851-b0951ab712f7" satisfied condition "Succeeded or Failed"
Jun 10 22:05:16.402: INFO: Trying to get logs from node node1 pod pod-ecc1e301-13d1-497e-8851-b0951ab712f7 container test-container:
STEP: delete the pod
Jun 10 22:05:16.415: INFO: Waiting for pod pod-ecc1e301-13d1-497e-8851-b0951ab712f7 to disappear
Jun 10 22:05:16.417: INFO: Pod pod-ecc1e301-13d1-497e-8851-b0951ab712f7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:16.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4592" for this suite.
• [SLOW TEST:6.076 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":36,"skipped":732,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:11.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
Jun 10 22:05:11.518: INFO: The status of Pod pod-hostip-3fdf6371-e747-4e7c-b272-9636ef919f4f is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:05:13.522: INFO: The status of Pod pod-hostip-3fdf6371-e747-4e7c-b272-9636ef919f4f is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:05:15.521: INFO: The status of Pod pod-hostip-3fdf6371-e747-4e7c-b272-9636ef919f4f is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:05:17.521: INFO: The status of Pod pod-hostip-3fdf6371-e747-4e7c-b272-9636ef919f4f is Running (Ready = true)
Jun 10 22:05:17.527: INFO: Pod pod-hostip-3fdf6371-e747-4e7c-b272-9636ef919f4f has hostIP: 10.10.190.207
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:17.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3331" for this suite.
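Aside: the hostIP logged at 22:05:17.527 is a plain status field; a minimal client-go sketch of the same read (pod name and namespace are placeholders, since the suite generates both):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Illustrative name/namespace; the field is empty until the pod is
        // scheduled, which is why the test polls for Running first.
        p, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("hostIP:", p.Status.HostIP)
    }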
• [SLOW TEST:6.062 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":732,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:04:54.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4110.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4110.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4110.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4110.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 10 22:05:02.335: INFO: DNS probes using dns-test-8dc7b7a1-794f-4d0e-9a90-8b67717cb1b2 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4110.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4110.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4110.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4110.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 10 22:05:10.377: INFO: DNS probes using dns-test-cfcbb93c-b86c-4e1a-8024-6caffaf452d7 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4110.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4110.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4110.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4110.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 10 22:05:18.424: INFO: DNS probes using dns-test-ffed57a1-52e7-4865-a8a0-7fe5735c8f39 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:18.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4110" for this suite.
• [SLOW TEST:24.168 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":10,"skipped":147,"failed":0}
SSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":215,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:16.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 10 22:05:16.463: INFO: Waiting up to 5m0s for pod "pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687" in namespace "emptydir-8388" to be "Succeeded or Failed"
Jun 10 22:05:16.466: INFO: Pod "pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694402ms
Jun 10 22:05:18.468: INFO: Pod "pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005355166s
Jun 10 22:05:20.473: INFO: Pod "pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009954494s
STEP: Saw pod success
Jun 10 22:05:20.473: INFO: Pod "pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687" satisfied condition "Succeeded or Failed"
Jun 10 22:05:20.476: INFO: Trying to get logs from node node2 pod pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687 container test-container:
STEP: delete the pod
Jun 10 22:05:20.490: INFO: Waiting for pod pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687 to disappear
Jun 10 22:05:20.491: INFO: Pod pod-b68276cd-f00d-4155-9b7d-80a9fc5ac687 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:20.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8388" for this suite.
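Aside: the externalName flip exercised in the DNS test above is an ordinary Service update; a minimal client-go sketch (namespace and the exact service shape are illustrative, the CNAME targets are the same ones the test uses):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // the suite uses its generated dns-* namespace
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
            Spec: corev1.ServiceSpec{
                Type:         corev1.ServiceTypeExternalName,
                ExternalName: "foo.example.com", // served as a CNAME by cluster DNS
            },
        }
        created, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        // Re-point the CNAME, as the test does with bar.example.com.
        created.Spec.ExternalName = "bar.example.com"
        if _, err := cs.CoreV1().Services(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }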
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":215,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:20.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should complete a service status lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Service
STEP: watching for the Service to be added
Jun 10 22:05:20.540: INFO: Found Service test-service-w8mt6 in namespace services-2832 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}]
Jun 10 22:05:20.540: INFO: Service test-service-w8mt6 created
STEP: Getting /status
Jun 10 22:05:20.546: INFO: Service test-service-w8mt6 has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Jun 10 22:05:20.551: INFO: observed Service test-service-w8mt6 in namespace services-2832 with annotations: map[] & LoadBalancer: {[]}
Jun 10 22:05:20.552: INFO: Found Service test-service-w8mt6 in namespace services-2832 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Jun 10 22:05:20.552: INFO: Service test-service-w8mt6 has service status patched
STEP: updating the ServiceStatus
Jun 10 22:05:20.562: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Jun 10 22:05:20.563: INFO: Observed Service test-service-w8mt6 in namespace services-2832 with annotations: map[] & Conditions: {[]}
Jun 10 22:05:20.563: INFO: Observed event: &Service{ObjectMeta:{test-service-w8mt6 services-2832 0bba706a-b97e-478c-95c2-3da26dbc704f 43658 0 2022-06-10 22:05:20 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-06-10 22:05:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.17.86,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.17.86],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Jun 10 22:05:20.563: INFO: Found Service test-service-w8mt6 in namespace services-2832 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jun 10 22:05:20.563: INFO: Service test-service-w8mt6 has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Jun 10 22:05:20.579: INFO: observed Service test-service-w8mt6 in namespace services-2832 with labels: map[test-service-static:true]
Jun 10 22:05:20.579: INFO: observed Service test-service-w8mt6 in namespace services-2832 with labels: map[test-service-static:true]
Jun 10 22:05:20.579: INFO: observed Service test-service-w8mt6 in namespace services-2832 with labels: map[test-service-static:true]
Jun 10 22:05:20.579: INFO: Found Service test-service-w8mt6 in namespace services-2832 with labels: map[test-service:patched test-service-static:true]
Jun 10 22:05:20.579: INFO: Service test-service-w8mt6 patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Jun 10 22:05:20.588: INFO: Observed event: ADDED
Jun 10 22:05:20.588: INFO: Observed event: MODIFIED
Jun 10 22:05:20.588: INFO: Observed event: MODIFIED
Jun 10 22:05:20.589: INFO: Observed event: MODIFIED
Jun 10 22:05:20.589: INFO: Found Service test-service-w8mt6 in namespace services-2832 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Jun 10 22:05:20.589: INFO: Service test-service-w8mt6 deleted
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:20.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2832" for this suite.
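Aside: the ingress IP observed at 22:05:20.552 lands via a patch against the status subresource; a minimal client-go sketch reusing the same annotation and documentation-range address (service name and namespace copied from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // A merge patch; the trailing "status" argument targets the
        // status subresource rather than the main object.
        patch := []byte(`{"metadata":{"annotations":{"patchedstatus":"true"}},` +
            `"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}`)
        _, err = cs.CoreV1().Services("services-2832").Patch(context.TODO(),
            "test-service-w8mt6", types.MergePatchType, patch,
            metav1.PatchOptions{}, "status")
        if err != nil {
            panic(err)
        }
    }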
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":21,"skipped":217,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:17.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 10 22:05:18.046: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 10 22:05:20.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495518, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495518, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495518, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495518, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 10 22:05:23.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:24.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3149" for this suite.
STEP: Destroying namespace "webhook-3149-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.581 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":38,"skipped":749,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:10.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jun 10 22:05:10.312: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 10 22:05:10.325: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 10 22:05:12.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495510, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495510, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495510, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495510, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 10 22:05:15.349: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:27.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1598" for this suite.
STEP: Destroying namespace "webhook-1598-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.462 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":16,"skipped":244,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:18.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jun 10 22:05:18.492: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6822 5bce2fd0-a7c6-46ef-bb2a-958c8a74e97b 43633 0 2022-06-10 22:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-10 22:05:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 10 22:05:18.492: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6822 5bce2fd0-a7c6-46ef-bb2a-958c8a74e97b 43634 0 2022-06-10 22:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-10 22:05:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 10 22:05:18.492: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6822 5bce2fd0-a7c6-46ef-bb2a-958c8a74e97b 43635 0 2022-06-10 22:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-10 22:05:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jun 10 22:05:28.513: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6822 5bce2fd0-a7c6-46ef-bb2a-958c8a74e97b 43925 0 2022-06-10 22:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-10 22:05:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 10 22:05:28.513: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6822 5bce2fd0-a7c6-46ef-bb2a-958c8a74e97b 43926 0 2022-06-10 22:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-10 22:05:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 10 22:05:28.513: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6822 5bce2fd0-a7c6-46ef-bb2a-958c8a74e97b 43927 0 2022-06-10 22:05:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-10 22:05:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:28.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6822" for this suite.
• [SLOW TEST:10.065 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":11,"skipped":150,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:04.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a replication controller
Jun 10 22:05:04.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 create -f -'
Jun 10 22:05:05.077: INFO: stderr: ""
Jun 10 22:05:05.078: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 10 22:05:05.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:05.248: INFO: stderr: ""
Jun 10 22:05:05.248: INFO: stdout: "update-demo-nautilus-5mgn9 update-demo-nautilus-6258h "
Jun 10 22:05:05.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:05.415: INFO: stderr: ""
Jun 10 22:05:05.415: INFO: stdout: ""
Jun 10 22:05:05.415: INFO: update-demo-nautilus-5mgn9 is created but not running
Jun 10 22:05:10.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:10.603: INFO: stderr: ""
Jun 10 22:05:10.603: INFO: stdout: "update-demo-nautilus-5mgn9 update-demo-nautilus-6258h "
Jun 10 22:05:10.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:10.767: INFO: stderr: ""
Jun 10 22:05:10.767: INFO: stdout: "true"
Jun 10 22:05:10.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jun 10 22:05:10.948: INFO: stderr: ""
Jun 10 22:05:10.949: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jun 10 22:05:10.949: INFO: validating pod update-demo-nautilus-5mgn9
Jun 10 22:05:10.983: INFO: got data: { "image": "nautilus.jpg" }
Jun 10 22:05:10.983: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 10 22:05:10.983: INFO: update-demo-nautilus-5mgn9 is verified up and running
Jun 10 22:05:10.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-6258h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:11.164: INFO: stderr: ""
Jun 10 22:05:11.164: INFO: stdout: "true"
Jun 10 22:05:11.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-6258h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jun 10 22:05:11.323: INFO: stderr: ""
Jun 10 22:05:11.323: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jun 10 22:05:11.323: INFO: validating pod update-demo-nautilus-6258h
Jun 10 22:05:11.327: INFO: got data: { "image": "nautilus.jpg" }
Jun 10 22:05:11.327: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 10 22:05:11.327: INFO: update-demo-nautilus-6258h is verified up and running
STEP: scaling down the replication controller
Jun 10 22:05:11.338: INFO: scanned /root for discovery docs:
Jun 10 22:05:11.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 scale rc update-demo-nautilus --replicas=1 --timeout=5m'
Jun 10 22:05:11.554: INFO: stderr: ""
Jun 10 22:05:11.554: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 10 22:05:11.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:11.740: INFO: stderr: ""
Jun 10 22:05:11.740: INFO: stdout: "update-demo-nautilus-5mgn9 update-demo-nautilus-6258h "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 10 22:05:16.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:16.923: INFO: stderr: ""
Jun 10 22:05:16.923: INFO: stdout: "update-demo-nautilus-5mgn9 update-demo-nautilus-6258h "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 10 22:05:21.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:22.096: INFO: stderr: ""
Jun 10 22:05:22.096: INFO: stdout: "update-demo-nautilus-5mgn9 "
Jun 10 22:05:22.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:22.262: INFO: stderr: ""
Jun 10 22:05:22.262: INFO: stdout: "true"
Jun 10 22:05:22.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jun 10 22:05:22.432: INFO: stderr: ""
Jun 10 22:05:22.432: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jun 10 22:05:22.432: INFO: validating pod update-demo-nautilus-5mgn9
Jun 10 22:05:22.435: INFO: got data: { "image": "nautilus.jpg" }
Jun 10 22:05:22.435: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 10 22:05:22.435: INFO: update-demo-nautilus-5mgn9 is verified up and running
STEP: scaling up the replication controller
Jun 10 22:05:22.445: INFO: scanned /root for discovery docs:
Jun 10 22:05:22.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
Jun 10 22:05:22.661: INFO: stderr: ""
Jun 10 22:05:22.661: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 10 22:05:22.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:22.831: INFO: stderr: ""
Jun 10 22:05:22.831: INFO: stdout: "update-demo-nautilus-5mgn9 update-demo-nautilus-mkcbt "
Jun 10 22:05:22.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:22.992: INFO: stderr: ""
Jun 10 22:05:22.992: INFO: stdout: "true"
Jun 10 22:05:22.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jun 10 22:05:23.145: INFO: stderr: ""
Jun 10 22:05:23.145: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jun 10 22:05:23.145: INFO: validating pod update-demo-nautilus-5mgn9
Jun 10 22:05:23.148: INFO: got data: { "image": "nautilus.jpg" }
Jun 10 22:05:23.148: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 10 22:05:23.148: INFO: update-demo-nautilus-5mgn9 is verified up and running
Jun 10 22:05:23.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-mkcbt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:23.320: INFO: stderr: ""
Jun 10 22:05:23.320: INFO: stdout: ""
Jun 10 22:05:23.320: INFO: update-demo-nautilus-mkcbt is created but not running
Jun 10 22:05:28.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jun 10 22:05:28.493: INFO: stderr: ""
Jun 10 22:05:28.493: INFO: stdout: "update-demo-nautilus-5mgn9 update-demo-nautilus-mkcbt "
Jun 10 22:05:28.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:28.667: INFO: stderr: ""
Jun 10 22:05:28.667: INFO: stdout: "true"
Jun 10 22:05:28.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-5mgn9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jun 10 22:05:28.843: INFO: stderr: ""
Jun 10 22:05:28.843: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jun 10 22:05:28.843: INFO: validating pod update-demo-nautilus-5mgn9
Jun 10 22:05:28.846: INFO: got data: { "image": "nautilus.jpg" }
Jun 10 22:05:28.846: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 10 22:05:28.846: INFO: update-demo-nautilus-5mgn9 is verified up and running
Jun 10 22:05:28.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-mkcbt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jun 10 22:05:29.034: INFO: stderr: ""
Jun 10 22:05:29.034: INFO: stdout: "true"
Jun 10 22:05:29.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods update-demo-nautilus-mkcbt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jun 10 22:05:29.201: INFO: stderr: ""
Jun 10 22:05:29.201: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jun 10 22:05:29.201: INFO: validating pod update-demo-nautilus-mkcbt
Jun 10 22:05:29.207: INFO: got data: { "image": "nautilus.jpg" }
Jun 10 22:05:29.207: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 10 22:05:29.207: INFO: update-demo-nautilus-mkcbt is verified up and running
STEP: using delete to clean up resources
Jun 10 22:05:29.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 delete --grace-period=0 --force -f -'
Jun 10 22:05:29.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 10 22:05:29.347: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 10 22:05:29.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get rc,svc -l name=update-demo --no-headers'
Jun 10 22:05:29.547: INFO: stderr: "No resources found in kubectl-4180 namespace.\n"
Jun 10 22:05:29.547: INFO: stdout: ""
Jun 10 22:05:29.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4180 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 10 22:05:29.725: INFO: stderr: ""
Jun 10 22:05:29.725: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:05:29.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4180" for this suite.
• [SLOW TEST:25.037 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":29,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:24.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Jun 10 22:05:30.258: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1183 PodName:pod-sharedvolume-67800e12-cded-4012-8453-3e0931feffc7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:05:30.258: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:05:30.510: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:30.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1183" for this suite. 
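The shared-volume test above relies on two containers in one pod mounting the same emptyDir, so a file written by one container is visible to the other. A minimal sketch of that pattern, with illustrative names and paths (the run above uses an nginx writer and a busybox reader under /usr/share/volumeshare):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /data/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF
# Read the file from the other container, as the test's ExecWithOptions does:
kubectl exec shared-volume-demo -c reader -- cat /data/shareddata.txt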
• [SLOW TEST:6.304 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":39,"skipped":785,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:30.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 10 22:05:30.576: INFO: Waiting up to 5m0s for pod "pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824" in namespace "emptydir-738" to be "Succeeded or Failed" Jun 10 22:05:30.578: INFO: Pod "pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38904ms Jun 10 22:05:32.581: INFO: Pod "pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005534974s Jun 10 22:05:34.587: INFO: Pod "pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01107482s Jun 10 22:05:36.592: INFO: Pod "pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01638492s STEP: Saw pod success Jun 10 22:05:36.592: INFO: Pod "pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824" satisfied condition "Succeeded or Failed" Jun 10 22:05:36.595: INFO: Trying to get logs from node node2 pod pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824 container test-container: STEP: delete the pod Jun 10 22:05:36.611: INFO: Waiting for pod pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824 to disappear Jun 10 22:05:36.613: INFO: Pod pod-cc01090d-8b1b-437f-89d9-ab8bd1bb7824 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:36.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-738" for this suite. 
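The "Succeeded or Failed" wait above is a phase poll with a 5m cap; a rough shell equivalent, with the namespace taken from this run and the pod name left as a placeholder:

# Poll the pod phase every 2s, for up to 5 minutes:
for i in $(seq 1 150); do
  phase=$(kubectl -n emptydir-738 get pod <pod-name> -o jsonpath='{.status.phase}')
  case "$phase" in Succeeded|Failed) echo "$phase"; break ;; esac
  sleep 2
done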
• [SLOW TEST:6.082 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":793,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:28.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:05:28.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1" in namespace "downward-api-3508" to be "Succeeded or Failed" Jun 10 22:05:28.598: INFO: Pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282216ms Jun 10 22:05:30.604: INFO: Pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008744004s Jun 10 22:05:32.608: INFO: Pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011797684s Jun 10 22:05:34.612: INFO: Pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016204353s Jun 10 22:05:36.616: INFO: Pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020170582s STEP: Saw pod success Jun 10 22:05:36.616: INFO: Pod "downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1" satisfied condition "Succeeded or Failed" Jun 10 22:05:36.618: INFO: Trying to get logs from node node1 pod downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1 container client-container: STEP: delete the pod Jun 10 22:05:36.630: INFO: Waiting for pod downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1 to disappear Jun 10 22:05:36.632: INFO: Pod downwardapi-volume-a3079bfd-28e9-44ea-91f3-de3da66f4ce1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:36.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3508" for this suite. 
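The volume plugin exercised above projects a container's cpu limit into a file through a downward API resourceFieldRef. A minimal sketch of the pod stanza involved (all names, paths, and values illustrative, not from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "2"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF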
• [SLOW TEST:8.080 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:36.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Jun 10 22:05:41.215: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3579 pod-service-account-94057c51-f073-43cd-8931-7021c2e1c4bc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 10 22:05:41.461: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3579 pod-service-account-94057c51-f073-43cd-8931-7021c2e1c4bc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 10 22:05:41.704: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3579 pod-service-account-94057c51-f073-43cd-8931-7021c2e1c4bc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:41.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3579" for this suite. 
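The three files read above are not test fixtures; they are the standard service-account mount that every pod (unless it opts out) receives at /var/run/secrets/kubernetes.io/serviceaccount. A quick way to confirm from any pod (pod name a placeholder):

kubectl exec <pod-name> -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expected entries: ca.crt  namespace  token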
• [SLOW TEST:5.313 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":41,"skipped":814,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:03:27.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-826 STEP: creating replication controller nodeport-test in namespace services-826 I0610 22:03:27.222934 25 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-826, replica count: 2 I0610 22:03:30.274634 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:03:33.276111 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:03:36.278926 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 22:03:36.278: INFO: Creating new exec pod Jun 10 22:03:41.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jun 10 22:03:41.551: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jun 10 22:03:41.551: INFO: stdout: "nodeport-test-nzshj" Jun 10 22:03:41.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.1.185 80' Jun 10 22:03:41.878: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.1.185 80\nConnection to 10.233.1.185 80 port [tcp/http] succeeded!\n" Jun 10 22:03:41.879: INFO: stdout: "nodeport-test-nzshj" Jun 10 22:03:41.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:03:42.140: INFO: rc: 1 Jun 10 22:03:42.140: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused 
command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:03:43.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:03:43.398: INFO: rc: 1 Jun 10 22:03:43.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
[identical retries elided: the same probe was re-run roughly once per second from 22:03:44 through 22:04:49, and every attempt failed the same way, "nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused", followed by "Retrying..."]
Jun 10 22:04:50.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:50.399: INFO: rc: 1 Jun 10 22:04:50.399: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
Jun 10 22:04:51.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:51.774: INFO: rc: 1 Jun 10 22:04:51.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:04:52.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:52.377: INFO: rc: 1 Jun 10 22:04:52.377: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:04:53.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:53.400: INFO: rc: 1 Jun 10 22:04:53.401: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:04:54.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:54.378: INFO: rc: 1 Jun 10 22:04:54.378: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:04:55.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:55.400: INFO: rc: 1 Jun 10 22:04:55.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:04:56.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:56.413: INFO: rc: 1 Jun 10 22:04:56.413: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:04:57.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:57.411: INFO: rc: 1 Jun 10 22:04:57.411: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:04:58.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:04:59.144: INFO: rc: 1 Jun 10 22:04:59.144: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:00.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:00.958: INFO: rc: 1 Jun 10 22:05:00.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:01.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:01.826: INFO: rc: 1 Jun 10 22:05:01.826: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:02.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:02.402: INFO: rc: 1 Jun 10 22:05:02.402: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:03.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:03.473: INFO: rc: 1 Jun 10 22:05:03.473: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:04.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:04.588: INFO: rc: 1 Jun 10 22:05:04.588: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:05.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:05.394: INFO: rc: 1 Jun 10 22:05:05.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:06.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:06.405: INFO: rc: 1 Jun 10 22:05:06.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:07.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:07.385: INFO: rc: 1 Jun 10 22:05:07.385: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:08.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:08.788: INFO: rc: 1 Jun 10 22:05:08.788: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:09.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:09.367: INFO: rc: 1 Jun 10 22:05:09.368: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:10.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:10.368: INFO: rc: 1 Jun 10 22:05:10.368: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:11.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:11.807: INFO: rc: 1 Jun 10 22:05:11.808: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:12.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:12.406: INFO: rc: 1 Jun 10 22:05:12.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:13.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:13.780: INFO: rc: 1 Jun 10 22:05:13.781: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:14.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:14.591: INFO: rc: 1 Jun 10 22:05:14.591: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:15.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:15.432: INFO: rc: 1 Jun 10 22:05:15.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:16.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:16.481: INFO: rc: 1 Jun 10 22:05:16.481: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:17.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:17.381: INFO: rc: 1 Jun 10 22:05:17.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:18.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:18.383: INFO: rc: 1 Jun 10 22:05:18.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:19.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:19.379: INFO: rc: 1 Jun 10 22:05:19.379: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32205 + echo hostName nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:20.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:20.386: INFO: rc: 1 Jun 10 22:05:20.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:21.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:21.368: INFO: rc: 1 Jun 10 22:05:21.368: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:22.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:22.378: INFO: rc: 1 Jun 10 22:05:22.378: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:23.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:23.406: INFO: rc: 1 Jun 10 22:05:23.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:24.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:24.371: INFO: rc: 1 Jun 10 22:05:24.371: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:25.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:25.435: INFO: rc: 1 Jun 10 22:05:25.435: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:26.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:26.566: INFO: rc: 1 Jun 10 22:05:26.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:27.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:27.414: INFO: rc: 1 Jun 10 22:05:27.414: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:28.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:29.134: INFO: rc: 1 Jun 10 22:05:29.134: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:29.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:29.421: INFO: rc: 1 Jun 10 22:05:29.421: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:30.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:30.530: INFO: rc: 1 Jun 10 22:05:30.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:31.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:33.255: INFO: rc: 1 Jun 10 22:05:33.255: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:34.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:34.736: INFO: rc: 1 Jun 10 22:05:34.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:35.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:35.486: INFO: rc: 1 Jun 10 22:05:35.486: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:36.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:36.589: INFO: rc: 1 Jun 10 22:05:36.589: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:37.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:37.395: INFO: rc: 1 Jun 10 22:05:37.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:38.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:38.604: INFO: rc: 1 Jun 10 22:05:38.604: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:05:39.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:39.604: INFO: rc: 1 Jun 10 22:05:39.604: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:40.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:40.395: INFO: rc: 1 Jun 10 22:05:40.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:41.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:41.408: INFO: rc: 1 Jun 10 22:05:41.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:42.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:42.388: INFO: rc: 1 Jun 10 22:05:42.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:42.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205' Jun 10 22:05:42.631: INFO: rc: 1 Jun 10 22:05:42.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-826 exec execpodwpls4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32205: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32205 nc: connect to 10.10.190.207 port 32205 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
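The probe being retried above is the framework shelling out to `kubectl exec` so that `nc` runs inside the client pod execpodwpls4 against the node IP 10.10.190.207 and NodePort 32205, with the whole wait bounded by the 2m0s deadline that surfaces in the FAIL below. As a rough illustration only, and not the e2e framework's actual helper, the poll-until-deadline pattern visible in the log can be sketched in Go as follows (waitForEndpoint is a hypothetical name; a real in-cluster check would still need to run from a pod with a route to the node, as kubectl exec does here):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForEndpoint (hypothetical helper, not the e2e framework's code)
// retries a TCP connect roughly once per second, mirroring the cadence
// in the log above, and gives up once the overall timeout elapses.
func waitForEndpoint(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// 2s per-attempt limit, like `nc -w 2` in the probe command.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // connection accepted: the endpoint is reachable
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// Node IP and NodePort under test in this run.
	if err := waitForEndpoint("10.10.190.207:32205", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

A one-off manual equivalent of a single probe is the same command the framework issues: kubectl --namespace=services-826 exec execpodwpls4 -- /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 32205'.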
Jun 10 22:05:42.631: FAIL: Unexpected error:
    <*errors.errorString | 0xc0036ea6f0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32205 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32205 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cc0f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000cc0f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000cc0f00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-826".
STEP: Found 17 events.
Jun 10 22:05:42.647: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodwpls4: { } Scheduled: Successfully assigned services-826/execpodwpls4 to node1
Jun 10 22:05:42.647: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-2v642: { } Scheduled: Successfully assigned services-826/nodeport-test-2v642 to node2
Jun 10 22:05:42.647: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-nzshj: { } Scheduled: Successfully assigned services-826/nodeport-test-nzshj to node2
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:27 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-2v642
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:27 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-nzshj
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:29 +0000 UTC - event for nodeport-test-2v642: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:29 +0000 UTC - event for nodeport-test-2v642: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 292.095857ms
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:30 +0000 UTC - event for nodeport-test-2v642: {kubelet node2} Started: Started container nodeport-test
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:30 +0000 UTC - event for nodeport-test-2v642: {kubelet node2} Created: Created container nodeport-test
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:30 +0000 UTC - event for nodeport-test-nzshj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:30 +0000 UTC - event for nodeport-test-nzshj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 452.798418ms
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:31 +0000 UTC - event for nodeport-test-nzshj: {kubelet node2} Created: Created container nodeport-test
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:32 +0000 UTC - event for nodeport-test-nzshj: {kubelet node2} Started: Started container nodeport-test
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:37 +0000 UTC - event for execpodwpls4: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:38 +0000 UTC - event for execpodwpls4: {kubelet node1} Started: Started container agnhost-container
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:38 +0000 UTC - event for execpodwpls4: {kubelet node1} Created: Created container agnhost-container
Jun 10 22:05:42.647: INFO: At 2022-06-10 22:03:38 +0000 UTC - event for execpodwpls4: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 570.123003ms
Jun 10 22:05:42.650: INFO: POD                  NODE   PHASE    GRACE  CONDITIONS
Jun 10 22:05:42.650: INFO: execpodwpls4         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:36 +0000 UTC }]
Jun 10 22:05:42.650: INFO: nodeport-test-2v642  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:27 +0000 UTC }]
Jun 10 22:05:42.650: INFO: nodeport-test-nzshj  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:03:27 +0000 UTC }]
Jun 10 22:05:42.650: INFO:
Jun 10 22:05:42.655: INFO: Logging node info for node master1
Jun 10 22:05:42.657: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 44219 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:41 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:41 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:41 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:05:41 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 22:05:42.658: INFO: Logging kubelet events for node master1
Jun 10 22:05:42.660: INFO: Logging pods the kubelet thinks is on node master1
Jun 10 22:05:42.674: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:05:42.674: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:05:42.674: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-apiserver ready: true, restart count 0
Jun 10 22:05:42.674: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 10 22:05:42.674: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-scheduler ready: true, restart count 0
Jun 10 22:05:42.674: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-proxy ready: true, restart count 3
Jun 10 22:05:42.674: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container docker-registry ready: true, restart count 0
Jun 10 22:05:42.674: INFO: Container nginx ready: true, restart count 0
Jun 10 22:05:42.674: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container nfd-controller ready: true, restart count 0
Jun 10 22:05:42.674: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:05:42.674: INFO: Container prometheus-operator ready: true, restart count 0
Jun 10 22:05:42.674: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Init container install-cni ready: true, restart count 0
Jun 10 22:05:42.674: INFO: Container kube-flannel ready: true, restart count 1
Jun 10 22:05:42.674: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:05:42.674: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:05:42.674: INFO: Container autoscaler ready: true, restart count 1
Jun 10 22:05:42.770: INFO: Latency metrics for node master1
Jun 10 22:05:42.770: INFO: Logging node info for node master2
Jun 10 22:05:42.773: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43
44200 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:40 +0000 
UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:40 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:40 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:05:40 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:05:42.774: INFO: Logging kubelet events for node master2 Jun 10 22:05:42.776: INFO: Logging pods the kubelet thinks are on node master2 Jun 10 22:05:42.790: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 10 22:05:42.790: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Container kube-scheduler ready: true, restart count 3 Jun 10 22:05:42.790: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:05:42.790: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:05:42.790: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:05:42.790: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:05:42.790: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Container coredns ready: true, restart count 1 Jun 10 22:05:42.790: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.790: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:05:42.790: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:05:42.790: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:05:42.790: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:05:42.876: INFO: Latency metrics for node master2 Jun 10 22:05:42.876: INFO: Logging node info for node master3 Jun 10 22:05:42.878: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 44184 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:37 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:37 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:37 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:05:37 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:05:42.879: INFO: Logging kubelet events for node master3 Jun 10 22:05:42.881: INFO: Logging pods the kubelet thinks are on node master3 Jun 10 22:05:42.889: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:05:42.889: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:05:42.889: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:05:42.889: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:05:42.889: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:05:42.889: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Container kube-scheduler ready: true, restart count 1 Jun 10 22:05:42.889: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:05:42.889: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:05:42.889: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:05:42.889: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:05:42.889: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.889: INFO: Container coredns ready: true, restart count 1 Jun 10 22:05:42.970: INFO: Latency metrics for node master3 Jun 10 22:05:42.970: INFO: Logging node info for node node1 Jun 10 22:05:42.973: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 44140 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:36 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:36 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:36 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:05:36 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:05:42.974: INFO: Logging kubelet events for node node1 Jun 10 22:05:42.976: INFO: Logging pods the kubelet thinks are on node node1 Jun 10 22:05:42.993: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10
20:12:30 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 22:05:42.993: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:05:42.993: INFO: Container collectd ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:05:42.993: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:05:42.993: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:05:42.993: INFO: test-pod started at 2022-06-10 22:02:20 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container webserver ready: true, restart count 0 Jun 10 22:05:42.993: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:05:42.993: INFO: execpod-affinitybwztk started at 2022-06-10 22:05:36 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:05:42.993: INFO: pod-with-poststart-http-hook started at 2022-06-10 22:05:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container pod-with-poststart-http-hook ready: false, restart count 0 Jun 10 22:05:42.993: INFO: affinity-nodeport-transition-cx685 started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 10 22:05:42.993: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:05:42.993: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:05:42.993: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:05:42.993: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:05:42.993: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container tas-extender ready: true, restart count 0 Jun 10 22:05:42.993: INFO: execpod-affinitypdwf4 started at 2022-06-10 22:05:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:05:42.993: INFO: netserver-0 started at 2022-06-10 22:05:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container webserver ready: false, restart count 0 Jun 10 22:05:42.993: INFO: sample-crd-conversion-webhook-deployment-697cdbd8f4-fbn58 started at 2022-06-10 22:05:37 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container sample-crd-conversion-webhook ready: true, restart count 0 Jun 10 22:05:42.993: INFO: execpodwpls4 started at 2022-06-10 22:03:36 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container agnhost-container ready: true, 
restart count 0 Jun 10 22:05:42.993: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 10 22:05:42.993: INFO: Container discover ready: false, restart count 0 Jun 10 22:05:42.993: INFO: Container init ready: false, restart count 0 Jun 10 22:05:42.993: INFO: Container install ready: false, restart count 0 Jun 10 22:05:42.993: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 10 22:05:42.993: INFO: Container config-reloader ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container grafana ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container prometheus ready: true, restart count 1 Jun 10 22:05:42.993: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:05:42.993: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:42.993: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:05:42.993: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:05:42.993: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:05:42.993: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:05:43.302: INFO: Latency metrics for node node1 Jun 10 22:05:43.302: INFO: Logging node info for node node2 Jun 10 22:05:43.305: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 44083 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:12:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:34 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:34 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:05:34 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:05:34 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:05:43.306: INFO: Logging kubelet events for node node2 Jun 10 22:05:43.309: INFO: Logging pods the kubelet thinks are on node node2 Jun 10 22:05:43.399: INFO: affinity-nodeport-transition-rpgll started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 10 22:05:43.400: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 22:05:43.400: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:05:43.400: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:05:43.400: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:05:43.400: INFO: affinity-nodeport-transition-st24r started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses
recorded) Jun 10 22:05:43.400: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 10 22:05:43.400: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:05:43.400: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:05:43.400: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:05:43.400: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:05:43.400: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:05:43.400: INFO: affinity-nodeport-w8zqs started at 2022-06-10 22:04:51 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container affinity-nodeport ready: true, restart count 0 Jun 10 22:05:43.400: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:05:43.400: INFO: pod-service-account-94057c51-f073-43cd-8931-7021c2e1c4bc started at 2022-06-10 22:05:37 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container test ready: true, restart count 0 Jun 10 22:05:43.400: INFO: affinity-nodeport-s59tk started at 2022-06-10 22:04:51 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container affinity-nodeport ready: true, restart count 0 Jun 10 22:05:43.400: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:05:43.400: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:05:43.400: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:05:43.400: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded) Jun 10 22:05:43.400: INFO: Container discover ready: false, restart count 0 Jun 10 22:05:43.400: INFO: Container init ready: false, restart count 0 Jun 10 22:05:43.400: INFO: Container install ready: false, restart count 0 Jun 10 22:05:43.400: INFO: nodeport-test-2v642 started at 2022-06-10 22:03:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container nodeport-test ready: true, restart count 0 Jun 10 22:05:43.400: INFO: nodeport-test-nzshj started at 2022-06-10 22:03:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container nodeport-test ready: true, restart count 0 Jun 10 22:05:43.400: INFO: pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665 started at 2022-06-10 22:05:42 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container env-test ready: false, restart count 0 Jun 10 22:05:43.400: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:05:43.400: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container kube-sriovdp ready: true, 
restart count 0 Jun 10 22:05:43.400: INFO: affinity-nodeport-784pk started at 2022-06-10 22:04:51 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container affinity-nodeport ready: true, restart count 0 Jun 10 22:05:43.400: INFO: netserver-1 started at 2022-06-10 22:05:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container webserver ready: false, restart count 0 Jun 10 22:05:43.400: INFO: pod-handle-http-request started at 2022-06-10 22:05:20 +0000 UTC (0+1 container statuses recorded) Jun 10 22:05:43.400: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:05:43.400: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:05:43.400: INFO: Container collectd ready: true, restart count 0 Jun 10 22:05:43.400: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:05:43.400: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:05:43.745: INFO: Latency metrics for node node2 Jun 10 22:05:43.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-826" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [136.571 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:05:42.631: Unexpected error: <*errors.errorString | 0xc0036ea6f0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32205 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32205 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:41.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-4696/secret-test-3ce6c8eb-ca5b-4472-9fae-096a6b76ed4b STEP: Creating a pod to test consume secrets Jun 10 22:05:42.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665" in namespace "secrets-4696" to be "Succeeded or Failed" Jun 10 22:05:42.024: INFO: Pod "pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055608ms Jun 10 22:05:44.027: INFO: Pod "pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005391305s Jun 10 22:05:46.030: INFO: Pod "pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008853213s STEP: Saw pod success Jun 10 22:05:46.031: INFO: Pod "pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665" satisfied condition "Succeeded or Failed" Jun 10 22:05:46.033: INFO: Trying to get logs from node node2 pod pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665 container env-test: STEP: delete the pod Jun 10 22:05:46.045: INFO: Waiting for pod pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665 to disappear Jun 10 22:05:46.047: INFO: Pod pod-configmaps-456af38a-6809-470f-b3bf-88fd9e36e665 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:46.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4696" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":815,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:46.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-b46c2ca7-6fe2-4e80-8791-c17fb608afa8 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:46.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3166" for this suite. 
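The empty-key case that just passed never reaches a kubelet: the API server's validation rejects any Secret whose data map contains an empty key, so the spec is done as soon as Create returns an error. A minimal client-go sketch of the same request follows; it is a hedged illustration, not the suite's own code, and the namespace and secret name are made up.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite prints at startup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A data map keyed by "" fails server-side validation; Create
	// returns an Invalid error and nothing is persisted.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	_, err = client.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	fmt.Println("expected validation error:", err)
}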
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":43,"skipped":818,"failed":0} SSSSSSSSSSS ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":16,"skipped":305,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:43.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 10 22:05:47.824: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:47.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9155" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":305,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:20.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Jun 10 22:05:20.660: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:22.663: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:24.665: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 10 22:05:24.679: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:26.683: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:28.683: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:30.682: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 10 22:05:30.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:30.699: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:32.700: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:32.703: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:34.700: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:34.703: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:36.701: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:36.704: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:38.701: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:38.704: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:40.700: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:40.702: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:42.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:42.702: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:44.700: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:44.703: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:46.700: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:46.704: INFO: Pod pod-with-poststart-http-hook still exists Jun 10 22:05:48.701: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 10 22:05:48.704: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:48.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-604" for this suite. 
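The lifecycle-hook spec above stood up pod-handle-http-request as an HTTP sink, then created pod-with-poststart-http-hook and asserted that the sink received the kubelet's GET. A rough client-go sketch of the hooked pod follows, under stated assumptions rather than as the suite's fixture: the handler IP is hypothetical, and current client-go names the handler type LifecycleHandler (the v1.21-era API used by this cluster called it Handler).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// hookHost is hypothetical: the pod IP of an HTTP server playing
	// the role of pod-handle-http-request.
	hookHost := "10.244.4.174"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.4.1",
				Lifecycle: &corev1.Lifecycle{
					// Fired by the kubelet right after the container starts
					// (corev1.Handler in older, v1.21-era client-go).
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: hookHost,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}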
• [SLOW TEST:28.091 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":229,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:36.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 10 22:05:37.014: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 10 22:05:39.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495537, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495537, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495537, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495537, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:05:42.038: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:05:42.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:50.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2598" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.525 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":13,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:47.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Jun 10 22:05:47.929: INFO: Waiting up to 5m0s for pod "var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe" in namespace "var-expansion-3603" to be "Succeeded or Failed" Jun 10 22:05:47.932: INFO: Pod "var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319906ms Jun 10 22:05:49.935: INFO: Pod "var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005869085s Jun 10 22:05:51.938: INFO: Pod "var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008725849s Jun 10 22:05:53.943: INFO: Pod "var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013221433s STEP: Saw pod success Jun 10 22:05:53.943: INFO: Pod "var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe" satisfied condition "Succeeded or Failed" Jun 10 22:05:53.945: INFO: Trying to get logs from node node2 pod var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe container dapi-container: STEP: delete the pod Jun 10 22:05:54.063: INFO: Waiting for pod var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe to disappear Jun 10 22:05:54.065: INFO: Pod var-expansion-86438a2f-fc71-40a8-9738-d169723f3cfe no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:54.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3603" for this suite. 
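What the var-expansion spec exercised is the volume mount's subPathExpr field: the kubelet expands $(VAR) references from the container's environment before performing the mount, so a pod can place its data under a directory derived from its own name. A minimal sketch of such a pod, with illustrative names, image, and command (not the suite's exact fixture):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expansionPod builds a pod whose mount path depends on the pod's own
// name via subPathExpr.
func expansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "test -d /volume_mount"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// Expanded by the kubelet from the container's env
					// before the mount happens.
					SubPathExpr: "$(POD_NAME)/mypath",
				}},
			}},
		},
	}
}

func main() { _ = expansionPod() }

Each pod built this way writes beneath its own subdirectory of the shared volume, which is the property the test's dapi-container verifies.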
• [SLOW TEST:6.177 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":18,"skipped":333,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:29.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8374 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 10 22:05:29.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 22:05:29.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:31.912: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:33.914: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:35.912: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:37.912: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:39.913: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:41.911: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:43.913: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:45.912: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:47.911: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:49.913: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 22:05:51.912: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 10 22:05:51.916: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 10 22:05:55.938: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 10 22:05:55.938: INFO: Breadth first check of 10.244.3.242 on host 10.10.190.207... 
Jun 10 22:05:55.941: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.245:9080/dial?request=hostname&protocol=http&host=10.244.3.242&port=8080&tries=1'] Namespace:pod-network-test-8374 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:05:55.941: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:05:56.029: INFO: Waiting for responses: map[] Jun 10 22:05:56.029: INFO: reached 10.244.3.242 after 0/1 tries Jun 10 22:05:56.029: INFO: Breadth first check of 10.244.4.174 on host 10.10.190.208... Jun 10 22:05:56.032: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.245:9080/dial?request=hostname&protocol=http&host=10.244.4.174&port=8080&tries=1'] Namespace:pod-network-test-8374 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:05:56.032: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:05:56.126: INFO: Waiting for responses: map[] Jun 10 22:05:56.126: INFO: reached 10.244.4.174 after 0/1 tries Jun 10 22:05:56.126: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:56.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8374" for this suite. • [SLOW TEST:26.296 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":707,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:50.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-cc2acac1-15c4-4acc-8290-ad62e1f8033e STEP: Creating a pod to test consume configMaps Jun 10 22:05:50.325: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b" in namespace "projected-4063" to be "Succeeded or Failed" Jun 10 22:05:50.328: INFO: Pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.408078ms Jun 10 22:05:52.332: INFO: Pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006648762s Jun 10 22:05:54.336: INFO: Pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010413201s Jun 10 22:05:56.340: INFO: Pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014386819s Jun 10 22:05:58.344: INFO: Pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018826833s STEP: Saw pod success Jun 10 22:05:58.344: INFO: Pod "pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b" satisfied condition "Succeeded or Failed" Jun 10 22:05:58.346: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b container agnhost-container: STEP: delete the pod Jun 10 22:05:58.360: INFO: Waiting for pod pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b to disappear Jun 10 22:05:58.361: INFO: Pod pod-projected-configmaps-46ec7529-19e2-4474-a386-49c8910fa17b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:58.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4063" for this suite. • [SLOW TEST:8.084 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":237,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:58.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:05:58.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5490" for this suite. 
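The patch spec above is three API calls in sequence: a strategic-merge patch that adds a label, a list across all namespaces filtered on that label, and a label-selected delete. A hedged client-go sketch, with a hypothetical secret name and label (not the suite's code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Add a label via strategic merge; "testsecret" is a hypothetical
	// secret assumed to already exist in "default".
	patch := []byte(`{"metadata":{"labels":{"testsecret":"true"}}}`)
	if _, err := client.CoreV1().Secrets("default").Patch(
		context.TODO(), "testsecret", types.StrategicMergePatchType, patch, metav1.PatchOptions{},
	); err != nil {
		panic(err)
	}

	// Empty namespace = all namespaces, mirroring the cross-namespace list.
	list, err := client.CoreV1().Secrets("").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "testsecret=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println("secrets matching patched label:", len(list.Items))

	// Deleting by the same label is a DeleteCollection, as in the test.
	if err := client.CoreV1().Secrets("default").DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "testsecret=true"}); err != nil {
		panic(err)
	}
}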
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":15,"skipped":248,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:56.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:05:56.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8" in namespace "projected-6052" to be "Succeeded or Failed" Jun 10 22:05:56.268: INFO: Pod "downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682969ms Jun 10 22:05:58.272: INFO: Pod "downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006902738s Jun 10 22:06:00.275: INFO: Pod "downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010169441s STEP: Saw pod success Jun 10 22:06:00.275: INFO: Pod "downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8" satisfied condition "Succeeded or Failed" Jun 10 22:06:00.277: INFO: Trying to get logs from node node1 pod downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8 container client-container: STEP: delete the pod Jun 10 22:06:00.294: INFO: Waiting for pod downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8 to disappear Jun 10 22:06:00.296: INFO: Pod downwardapi-volume-7c163ee1-5ea5-4ed4-8a69-4531ae0d38e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:00.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6052" for this suite. 
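The projected downwardAPI volume behind the memory-limit spec resolves limits.memory against a named container and writes the value into a file that the container then reads back. A sketch of just that volume source, assuming a consuming container named client-container (the name is illustrative and must match the container that mounts the volume):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryLimitSource builds the projected downward-API source this kind
// of test relies on.
func memoryLimitSource() corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.memory",
							// Divisor 1 reports the raw byte value.
							Divisor: resource.MustParse("1"),
						},
					}},
				},
			}},
		},
	}
}

func main() { _ = memoryLimitSource() }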
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":756,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:46.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod (pod1) with hostport 54323 and hostIP 127.0.0.1 and expect it to be scheduled Jun 10 22:05:46.164: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:48.170: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:50.169: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod (pod2) with hostport 54323 but hostIP 10.10.190.208 on the node on which pod1 resides, and expect it to be scheduled Jun 10 22:05:50.182: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:52.186: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:54.187: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:56.186: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:05:58.187: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod (pod3) with hostport 54323 and hostIP 10.10.190.208, but using the UDP protocol, on the node on which pod2 resides Jun 10 22:05:58.203: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:00.206: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:02.207: INFO: The status of Pod pod3 is Running (Ready = true) Jun 10 22:06:02.218: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:04.226: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:06.222: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Jun 10 22:06:06.225: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.208 http://127.0.0.1:54323/hostname] Namespace:hostport-8019 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:06:06.225: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 Jun 10 22:06:06.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.208:54323/hostname] Namespace:hostport-8019 PodName:e2e-host-exec
ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:06:06.326: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 UDP Jun 10 22:06:06.412: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.208 54323] Namespace:hostport-8019 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 10 22:06:06.412: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:11.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-8019" for this suite. • [SLOW TEST:25.456 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":44,"skipped":829,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:11.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:06:11.630: INFO: Creating deployment "test-recreate-deployment" Jun 10 22:06:11.633: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Jun 10 22:06:11.639: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 10 22:06:13.647: INFO: Waiting for deployment "test-recreate-deployment" to complete Jun 10 22:06:13.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495571, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495571, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495571, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495571, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet
\"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:06:15.653: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 10 22:06:15.663: INFO: Updating deployment test-recreate-deployment Jun 10 22:06:15.663: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 10 22:06:15.702: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4694 a13b7fa9-b7e4-4371-b66b-055f49f12d19 44946 2 2022-06-10 22:06:11 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-10 22:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e110f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-06-10 22:06:15 +0000 UTC,LastTransitionTime:2022-06-10 22:06:15 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-06-10 22:06:15 +0000 UTC,LastTransitionTime:2022-06-10 22:06:11 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 10 22:06:15.706: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-4694 8defbf20-fe49-45e5-ba24-05ad7db4c4d4 44945 1 2022-06-10 22:06:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a13b7fa9-b7e4-4371-b66b-055f49f12d19 0xc002e11570 0xc002e11571}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a13b7fa9-b7e4-4371-b66b-055f49f12d19\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e115e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:06:15.706: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 10 22:06:15.706: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-4694 ae4e24f1-9ef0-45c2-bd19-245e3acfd089 44935 2 2022-06-10 22:06:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-recreate-deployment a13b7fa9-b7e4-4371-b66b-055f49f12d19 0xc002e11477 0xc002e11478}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a13b7fa9-b7e4-4371-b66b-055f49f12d19\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e11508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:06:15.710: INFO: Pod "test-recreate-deployment-85d47dcb4-9lkdd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-9lkdd test-recreate-deployment-85d47dcb4- deployment-4694 fe5cf2c2-803a-4b88-b5b1-83fc35265f72 44947 0 2022-06-10 22:06:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 8defbf20-fe49-45e5-ba24-05ad7db4c4d4 0xc002f2449f 0xc002f244b0}] [] [{kube-controller-manager Update v1 2022-06-10 22:06:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8defbf20-fe49-45e5-ba24-05ad7db4c4d4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-10 22:06:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s7t6x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7t6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:06:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:06:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:06:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-10 22:06:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:15.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4694" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":45,"skipped":841,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:54.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-wrkk STEP: Creating a pod to test atomic-volume-subpath Jun 10 22:05:54.124: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wrkk" in namespace "subpath-8675" to be "Succeeded or Failed" Jun 10 22:05:54.128: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360636ms Jun 10 22:05:56.132: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008208396s Jun 10 22:05:58.137: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 4.013380507s Jun 10 22:06:00.140: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 6.016369476s Jun 10 22:06:02.144: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 8.020146221s Jun 10 22:06:04.150: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 10.026123487s Jun 10 22:06:06.156: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 12.032376281s Jun 10 22:06:08.163: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 14.038427485s Jun 10 22:06:10.166: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 16.042131692s Jun 10 22:06:12.170: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 18.045621458s Jun 10 22:06:14.174: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 20.04966958s Jun 10 22:06:16.180: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 22.056234511s Jun 10 22:06:18.187: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Running", Reason="", readiness=true. Elapsed: 24.062392366s Jun 10 22:06:20.190: INFO: Pod "pod-subpath-test-configmap-wrkk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.065397617s STEP: Saw pod success Jun 10 22:06:20.190: INFO: Pod "pod-subpath-test-configmap-wrkk" satisfied condition "Succeeded or Failed" Jun 10 22:06:20.192: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-wrkk container test-container-subpath-configmap-wrkk: STEP: delete the pod Jun 10 22:06:20.208: INFO: Waiting for pod pod-subpath-test-configmap-wrkk to disappear Jun 10 22:06:20.211: INFO: Pod pod-subpath-test-configmap-wrkk no longer exists STEP: Deleting pod pod-subpath-test-configmap-wrkk Jun 10 22:06:20.211: INFO: Deleting pod "pod-subpath-test-configmap-wrkk" in namespace "subpath-8675" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:20.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8675" for this suite. 
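For reference, the spec shape the subpath test above exercises is small: a ConfigMap volume whose single key is mounted at a file path through VolumeMount.SubPath, so the kubelet's atomic writer can update the content the container sees. A minimal, illustrative sketch in Go using the k8s.io/api types (the ConfigMap name, image, and paths here are assumptions, not the suite's generated names):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Illustrative only: mount one key of a ConfigMap volume as a single
        // file via subPath, the mechanism pod-subpath-test-configmap-* verifies.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "config",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            // Assumed to exist with a key named "data".
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "tester",
                    Image:   "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
                    Command: []string{"cat", "/etc/demo/data"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "config",
                        MountPath: "/etc/demo/data",
                        SubPath:   "data", // mount a single file, not the whole volume
                    }},
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
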
• [SLOW TEST:26.137 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":338,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:15.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:06:15.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f" in namespace "downward-api-7278" to be "Succeeded or Failed" Jun 10 22:06:15.776: INFO: Pod "downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131182ms Jun 10 22:06:17.781: INFO: Pod "downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013399751s Jun 10 22:06:19.786: INFO: Pod "downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018440254s Jun 10 22:06:21.789: INFO: Pod "downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021583255s STEP: Saw pod success Jun 10 22:06:21.789: INFO: Pod "downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f" satisfied condition "Succeeded or Failed" Jun 10 22:06:21.791: INFO: Trying to get logs from node node2 pod downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f container client-container: STEP: delete the pod Jun 10 22:06:21.809: INFO: Waiting for pod downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f to disappear Jun 10 22:06:21.812: INFO: Pod downwardapi-volume-67ac9fb7-0e41-43d6-a5cd-b6796dde639f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:21.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7278" for this suite. 
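The DefaultMode assertion above comes down to one field on the downward API volume source; the same Items list also accepts a per-file Mode, which is what the later "should set mode on item file" case checks. A sketch of the relevant spec, with an illustrative 0400 mode and placeholder names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func ptr[T any](v T) *T { return &v }

    func main() {
        // Illustrative downward API volume: DefaultMode applies to every
        // projected file; an individual DownwardAPIVolumeFile may instead
        // carry its own Mode (the "mode on item file" variant).
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            DefaultMode: ptr(int32(0400)), // octal; serializes as 256
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "podname",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    APIVersion: "v1",
                                    FieldPath:  "metadata.name",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1", // illustrative
                    Command: []string{"ls", "-l", "/etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
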
• [SLOW TEST:6.085 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":848,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:20.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 22:06:20.490: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 22:06:22.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495580, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495580, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495580, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495580, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:06:25.517: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:25.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7884" for this suite. STEP: Destroying namespace "webhook-7884-markers" for this suite. 
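Once the webhook deployment and service above are ready, the interesting API object is the registration itself. A hedged sketch of what a mutating registration for ConfigMap creation looks like; the namespace, service name, path, and CA bundle are placeholders, not the values the suite generates:

    package main

    import (
        "fmt"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func ptr[T any](v T) *T { return &v }

    func main() {
        sideEffects := admissionregistrationv1.SideEffectClassNone
        cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-mutate-configmaps"},
            Webhooks: []admissionregistrationv1.MutatingWebhook{{
                Name: "mutate-configmaps.example.com", // placeholder
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-demo",     // placeholder
                        Name:      "e2e-test-webhook", // service fronting the webhook pod
                        Path:      ptr("/mutating-configmaps"),
                    },
                    // Placeholder: PEM CA bundle that signed the serving cert.
                    CABundle: []byte("<ca-bundle>"),
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
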
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.337 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:21.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:06:21.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac" in namespace "downward-api-2776" to be "Succeeded or Failed" Jun 10 22:06:21.898: INFO: Pod "downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219334ms Jun 10 22:06:23.903: INFO: Pod "downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006920686s Jun 10 22:06:25.907: INFO: Pod "downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011198871s STEP: Saw pod success Jun 10 22:06:25.907: INFO: Pod "downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac" satisfied condition "Succeeded or Failed" Jun 10 22:06:25.910: INFO: Trying to get logs from node node2 pod downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac container client-container: STEP: delete the pod Jun 10 22:06:25.922: INFO: Waiting for pod downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac to disappear Jun 10 22:06:25.924: INFO: Pod downwardapi-volume-a5ce429b-b39f-464f-bba8-8fd0cd2107ac no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:25.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2776" for this suite. 
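The memory-request case above differs from the fieldRef-based downward API cases only in using a resourceFieldRef, which projects a container's resource request or limit into the volume file. A sketch under the same caveats (names, image, and quantities are illustrative); swapping the Resource to "requests.cpu" gives the cpu-request variant that runs later in this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-resource-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                // Projects the named container's memory request
                                // (in bytes, with the default divisor of 1).
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1", // illustrative
                    Command: []string{"cat", "/etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
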
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:25.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-6226 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6226 to expose endpoints map[] Jun 10 22:06:26.010: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Jun 10 22:06:27.017: INFO: successfully validated that service endpoint-test2 in namespace services-6226 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6226 Jun 10 22:06:27.032: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:29.038: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:31.036: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:33.036: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6226 to expose endpoints map[pod1:[80]] Jun 10 22:06:33.047: INFO: successfully validated that service endpoint-test2 in namespace services-6226 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6226 Jun 10 22:06:33.059: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:35.065: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:37.064: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6226 to expose endpoints map[pod1:[80] pod2:[80]] Jun 10 22:06:37.078: INFO: successfully validated that service endpoint-test2 in namespace services-6226 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6226 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6226 to expose endpoints map[pod2:[80]] Jun 10 22:06:37.092: INFO: successfully validated that service endpoint-test2 in namespace services-6226 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6226 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6226 to expose endpoints map[] Jun 10 22:06:37.104: INFO: successfully validated that service endpoint-test2 in namespace services-6226 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:37.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6226" for 
this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.141 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":48,"skipped":894,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":20,"skipped":345,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:25.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:06:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 10 22:06:34.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 create -f -' Jun 10 22:06:34.718: INFO: stderr: "" Jun 10 22:06:34.718: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 10 22:06:34.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 delete e2e-test-crd-publish-openapi-6133-crds test-foo' Jun 10 22:06:34.874: INFO: stderr: "" Jun 10 22:06:34.874: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 10 22:06:34.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 apply -f -' Jun 10 22:06:35.245: INFO: stderr: "" Jun 10 22:06:35.245: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 10 22:06:35.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 delete e2e-test-crd-publish-openapi-6133-crds test-foo' Jun 10 22:06:35.417: INFO: stderr: "" Jun 10 22:06:35.417: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 10 22:06:35.417: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 create -f -' Jun 10 22:06:35.782: INFO: rc: 1 Jun 10 22:06:35.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 apply -f -' Jun 10 22:06:36.143: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 10 22:06:36.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 create -f -' Jun 10 22:06:36.449: INFO: rc: 1 Jun 10 22:06:36.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 --namespace=crd-publish-openapi-606 apply -f -' Jun 10 22:06:36.764: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 10 22:06:36.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 explain e2e-test-crd-publish-openapi-6133-crds' Jun 10 22:06:37.119: INFO: stderr: "" Jun 10 22:06:37.119: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 10 22:06:37.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 explain e2e-test-crd-publish-openapi-6133-crds.metadata' Jun 10 22:06:37.487: INFO: stderr: "" Jun 10 22:06:37.487: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 10 22:06:37.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 explain e2e-test-crd-publish-openapi-6133-crds.spec' Jun 10 22:06:37.839: INFO: stderr: "" Jun 10 22:06:37.839: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 10 22:06:37.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 explain e2e-test-crd-publish-openapi-6133-crds.spec.bars' Jun 10 22:06:38.206: INFO: stderr: "" Jun 10 22:06:38.206: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 10 22:06:38.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-606 explain e2e-test-crd-publish-openapi-6133-crds.spec.bars2' Jun 10 22:06:38.540: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:42.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-606" for this suite. 
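Everything kubectl explain printed above is served from the CRD's structural OpenAPI v3 schema; once the schema is published, client-side validation and explain work just as for built-in types. A rough sketch of a CRD in the shape of the test's Foo type (the group, kind, and fields approximate what the log shows; this is not the suite's generated definition):

    package main

    import (
        "fmt"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func ptr[T any](v T) *T { return &v }

    func main() {
        s := func(t, desc string) apiextensionsv1.JSONSchemaProps {
            return apiextensionsv1.JSONSchemaProps{Type: t, Description: desc}
        }
        // Approximates the Bar item seen in the explain output above.
        bar := apiextensionsv1.JSONSchemaProps{
            Type:     "object",
            Required: []string{"name"},
            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                "name": s("string", "Name of Bar."),
                "age":  s("string", "Age of Bar."),
                "bazs": {Type: "array", Description: "List of Bazs.",
                    Items: &apiextensionsv1.JSONSchemaPropsOrArray{Schema: ptr(s("string", ""))}},
            },
        }
        crd := &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-demo.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "crd-demo.example.com", // placeholder group
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextensionsv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                            Type:        "object",
                            Description: "Foo CRD for Testing",
                            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                "spec": {Type: "object", Description: "Specification of Foo",
                                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                        "bars": {Type: "array", Description: "List of Bars and their specs.",
                                            Items: &apiextensionsv1.JSONSchemaPropsOrArray{Schema: &bar}},
                                    }},
                                "status": {Type: "object", Description: "Status of Foo"},
                            },
                        },
                    },
                }},
            },
        }
        out, err := yaml.Marshal(crd)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
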
• [SLOW TEST:16.615 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":21,"skipped":345,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:37.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:43.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9092" for this suite. 
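The PodDisruptionBudget that test creates, updates, and patches is a small object. A minimal sketch (the selector and threshold are illustrative; minAvailable could equally be a percentage, or maxUnavailable could be used instead):

    package main

    import (
        "fmt"

        policyv1 "k8s.io/api/policy/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "sigs.k8s.io/yaml"
    )

    func main() {
        minAvailable := intstr.FromInt(2)
        pdb := &policyv1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb", Namespace: "default"},
            Spec: policyv1.PodDisruptionBudgetSpec{
                // Voluntary evictions are refused once fewer than 2 matching
                // pods would remain available.
                MinAvailable: &minAvailable,
                Selector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "demo"}, // placeholder
                },
            },
        }
        out, err := yaml.Marshal(pdb)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
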
• [SLOW TEST:6.074 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":49,"skipped":899,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:58.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-7844 Jun 10 22:05:58.517: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:00.521: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 10 22:06:00.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7844 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 10 22:06:00.760: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jun 10 22:06:00.760: INFO: stdout: "iptables" Jun 10 22:06:00.760: INFO: proxyMode: iptables Jun 10 22:06:00.766: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 10 22:06:00.768: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-7844 STEP: creating replication controller affinity-clusterip-timeout in namespace services-7844 I0610 22:06:00.778228 30 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7844, replica count: 3 I0610 22:06:03.830235 30 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:06:06.831316 30 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:06:09.832469 30 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 22:06:09.837: INFO: Creating new exec pod Jun 10 22:06:14.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7844 exec execpod-affinityqr856 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Jun 10 22:06:15.134: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Jun 10 22:06:15.134: INFO: stdout: "HTTP/1.1 400 Bad 
Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:06:15.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7844 exec execpod-affinityqr856 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.60.215 80' Jun 10 22:06:15.392: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.60.215 80\nConnection to 10.233.60.215 80 port [tcp/http] succeeded!\n" Jun 10 22:06:15.392: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:06:15.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7844 exec execpod-affinityqr856 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.60.215:80/ ; done' Jun 10 22:06:15.700: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n" Jun 10 22:06:15.701: INFO: stdout: "\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v\naffinity-clusterip-timeout-bsh5v" Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 
22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Received response from host: affinity-clusterip-timeout-bsh5v Jun 10 22:06:15.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7844 exec execpod-affinityqr856 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.60.215:80/' Jun 10 22:06:15.935: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n" Jun 10 22:06:15.935: INFO: stdout: "affinity-clusterip-timeout-bsh5v" Jun 10 22:06:35.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7844 exec execpod-affinityqr856 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.60.215:80/' Jun 10 22:06:36.194: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.60.215:80/\n" Jun 10 22:06:36.194: INFO: stdout: "affinity-clusterip-timeout-snnff" Jun 10 22:06:36.194: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-7844, will wait for the garbage collector to delete the pods Jun 10 22:06:36.262: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.908919ms Jun 10 22:06:36.362: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.73045ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:46.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7844" for this suite. 
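The affinity behaviour verified above (the same backend, affinity-clusterip-timeout-bsh5v, for every request while traffic keeps flowing, then a different backend after the 20-second pause) is driven by two Service fields. A sketch with an assumed 10-second timeout; the log only shows that the suite's value expired within the 20-second wait:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "sigs.k8s.io/yaml"
    )

    func ptr[T any](v T) *T { return &v }

    func main() {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "affinity-clusterip-timeout"},
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(9376), // backend port, illustrative
                    Protocol:   corev1.ProtocolTCP,
                }},
                // Pin each client IP to one backend...
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{
                        // ...but only while it keeps talking; assumed value.
                        TimeoutSeconds: ptr(int32(10)),
                    },
                },
            },
        }
        out, err := yaml.Marshal(svc)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
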
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:48.502 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":257,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:46.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:47.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4212" for this suite. 
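The lifecycle steps in that ConfigMap test map one-to-one onto client-go calls. A hedged sketch of the same sequence against a live cluster (the kubeconfig path, namespace, names, and labels are placeholders):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()
        ns := "default"

        // create
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-cm", Labels: map[string]string{"purpose": "demo"}},
            Data:       map[string]string{"key": "value"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // fetch
        got, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, "demo-cm", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("fetched:", got.Name)
        // patch
        patch := []byte(`{"data":{"key":"patched"}}`)
        if _, err := cs.CoreV1().ConfigMaps(ns).Patch(ctx, "demo-cm",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
        // list across all namespaces with a label selector
        list, err := cs.CoreV1().ConfigMaps(metav1.NamespaceAll).List(ctx,
            metav1.ListOptions{LabelSelector: "purpose=demo"})
        if err != nil {
            panic(err)
        }
        fmt.Println("matched:", len(list.Items))
        // delete by collection with the same selector
        if err := cs.CoreV1().ConfigMaps(ns).DeleteCollection(ctx,
            metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "purpose=demo"}); err != nil {
            panic(err)
        }
    }
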
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":17,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:47.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 10 22:06:47.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8773 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Jun 10 22:06:47.306: INFO: stderr: "" Jun 10 22:06:47.306: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Jun 10 22:06:47.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8773 delete pods e2e-test-httpd-pod' Jun 10 22:06:49.746: INFO: stderr: "" Jun 10 22:06:49.746: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:49.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8773" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":18,"skipped":289,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:49.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:06:49.809: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498" in namespace "downward-api-5154" to be "Succeeded or Failed" Jun 10 22:06:49.811: INFO: Pod "downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.613438ms Jun 10 22:06:51.815: INFO: Pod "downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006591086s Jun 10 22:06:53.820: INFO: Pod "downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010844356s STEP: Saw pod success Jun 10 22:06:53.820: INFO: Pod "downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498" satisfied condition "Succeeded or Failed" Jun 10 22:06:53.822: INFO: Trying to get logs from node node1 pod downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498 container client-container: STEP: delete the pod Jun 10 22:06:53.837: INFO: Waiting for pod downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498 to disappear Jun 10 22:06:53.839: INFO: Pod downwardapi-volume-7971548a-2bf8-457b-b71c-ca32136e6498 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:53.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5154" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":296,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:53.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:06:53.911: INFO: The status of Pod busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:55.915: INFO: The status of Pod busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:06:57.914: INFO: The status of Pod busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:06:57.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9044" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":310,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:05:16.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0610 22:05:16.189726 27 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:00.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3496" for this suite. 
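The deprecation warning in that CronJob test's output is worth acting on: batch/v1beta1 CronJob is unavailable from v1.25. A sketch of the equivalent object against batch/v1, with the concurrency policy the test exercises (the schedule, image, and command are illustrative):

    package main

    import (
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cj := &batchv1.CronJob{
            ObjectMeta: metav1.ObjectMeta{Name: "replace-demo"},
            Spec: batchv1.CronJobSpec{
                Schedule: "*/1 * * * *",
                // ReplaceConcurrent: if the previous job is still running when
                // the next schedule fires, delete it and start a fresh one.
                ConcurrencyPolicy: batchv1.ReplaceConcurrent,
                JobTemplate: batchv1.JobTemplateSpec{
                    Spec: batchv1.JobSpec{
                        Template: corev1.PodTemplateSpec{
                            Spec: corev1.PodSpec{
                                RestartPolicy: corev1.RestartPolicyOnFailure,
                                Containers: []corev1.Container{{
                                    Name:    "work",
                                    Image:   "busybox", // illustrative
                                    Command: []string{"sleep", "300"},
                                }},
                            },
                        },
                    },
                },
            },
        }
        out, err := yaml.Marshal(cj)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
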
• [SLOW TEST:104.062 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":38,"skipped":622,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:00.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 10 22:07:00.285: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jun 10 22:07:00.292: INFO: starting watch STEP: patching STEP: updating Jun 10 22:07:00.305: INFO: waiting for watch events with expected annotations Jun 10 22:07:00.305: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:00.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-5587" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":39,"skipped":629,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:57.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 10 22:06:58.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca" in namespace "downward-api-9125" to be "Succeeded or Failed" Jun 10 22:06:58.005: INFO: Pod "downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316287ms Jun 10 22:07:00.008: INFO: Pod "downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005776484s Jun 10 22:07:02.012: INFO: Pod "downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00965743s Jun 10 22:07:04.016: INFO: Pod "downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013986196s STEP: Saw pod success Jun 10 22:07:04.017: INFO: Pod "downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca" satisfied condition "Succeeded or Failed" Jun 10 22:07:04.019: INFO: Trying to get logs from node node1 pod downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca container client-container: STEP: delete the pod Jun 10 22:07:04.034: INFO: Waiting for pod downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca to disappear Jun 10 22:07:04.036: INFO: Pod downwardapi-volume-fbfbbfa2-06cc-448c-8f6a-010fa02afeca no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:04.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9125" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":323,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:42.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Jun 10 22:07:02.315: INFO: EndpointSlice for Service endpointslice-9805/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:12.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9805" for this suite. 
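------------------------------
Stepping back to the Downward API volume spec that passed above: the container's CPU request is projected into a file through a resourceFieldRef volume item. A minimal sketch of the mechanism, with illustrative names (the test generates its own pod and file names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo            # hypothetical name, for illustration only
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m               # expose the request in millicores, so the file reads "250"
  restartPolicy: Never
EOF
------------------------------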
• [SLOW TEST:30.123 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":22,"skipped":351,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:12.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint STEP: mirroring an update to a custom Endpoint Jun 10 22:07:12.431: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Jun 10 22:07:14.443: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:16.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-2397" for this suite. 
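------------------------------
Both endpoint-related specs above hinge on controller-managed derivation: for a Service with a selector, the endpointslice controller writes EndpointSlice objects labeled with the owning Service's name. A quick way to observe the same thing outside the suite, assuming a hypothetical app: demo label on some running pods:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-named-port          # echoes the Service name in the log above
spec:
  selector:
    app: demo                       # assumed pod label
  ports:
  - name: http
    port: 80
    targetPort: http                # pods must expose a containerPort named "http"
EOF
# slices are created by the control plane and carry the service-name label:
kubectl get endpointslices -l kubernetes.io/service-name=example-named-port

The EndpointSliceMirroring test covers the complementary path: Endpoints objects written directly (as used with selector-less Services) are mirrored into EndpointSlices by a separate controller, which is why that test can drive slice contents by creating, updating, and deleting a custom Endpoints resource.
------------------------------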
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":23,"skipped":381,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:04:51.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4749 STEP: creating service affinity-nodeport in namespace services-4749 STEP: creating replication controller affinity-nodeport in namespace services-4749 I0610 22:04:51.047787 36 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-4749, replica count: 3 I0610 22:04:54.099879 36 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:04:57.100369 36 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:05:00.101603 36 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 22:05:00.112: INFO: Creating new exec pod Jun 10 22:05:07.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Jun 10 22:05:07.411: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Jun 10 22:05:07.411: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:05:07.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.51.248 80' Jun 10 22:05:07.735: INFO: stderr: "+ nc -v -t -w 2 10.233.51.248 80\n+ echo hostName\nConnection to 10.233.51.248 80 port [tcp/http] succeeded!\n" Jun 10 22:05:07.735: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 10 22:05:07.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:05:08.034: INFO: rc: 1 Jun 10 22:05:08.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:09.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:05:09.301: INFO: rc: 1 Jun 10 22:05:09.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:10.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:05:10.294: INFO: rc: 1 Jun 10 22:05:10.294: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:11.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:05:11.902: INFO: rc: 1 Jun 10 22:05:11.902: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:05:12.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:05:12.325: INFO: rc: 1 Jun 10 22:05:12.325: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
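------------------------------
The attempts above and below are the framework's service-reachability probe retrying about once per second. A rough standalone equivalent of that loop, reusing the exec pod, node IP, and NodePort from this particular run (all of which are specific to this cluster):

# retry the NodePort probe until it succeeds or 60 attempts pass
for i in $(seq 1 60); do
  kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 \
    exec execpod-affinitypdwf4 -- /bin/sh -c \
    'echo hostName | nc -v -t -w 2 10.10.190.207 30179' && break
  sleep 1
done
------------------------------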
Jun 10 22:05:13.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:05:13.431: INFO: rc: 1 Jun 10 22:05:13.431: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... [the same probe repeats roughly once per second, every attempt failing with "nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused" and logging the identical error, up to and including the Jun 10 22:06:27 attempt]
Jun 10 22:06:28.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:29.270: INFO: rc: 1 Jun 10 22:06:29.270: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:30.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:30.303: INFO: rc: 1 Jun 10 22:06:30.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:31.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:31.325: INFO: rc: 1 Jun 10 22:06:31.325: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:32.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:32.281: INFO: rc: 1 Jun 10 22:06:32.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:33.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:33.298: INFO: rc: 1 Jun 10 22:06:33.298: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:34.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:34.269: INFO: rc: 1 Jun 10 22:06:34.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:35.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:35.311: INFO: rc: 1 Jun 10 22:06:35.311: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:36.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:36.299: INFO: rc: 1 Jun 10 22:06:36.299: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:37.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:37.282: INFO: rc: 1 Jun 10 22:06:37.283: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:38.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:39.925: INFO: rc: 1 Jun 10 22:06:39.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:40.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:40.333: INFO: rc: 1 Jun 10 22:06:40.333: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:41.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:42.181: INFO: rc: 1 Jun 10 22:06:42.181: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:43.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:43.390: INFO: rc: 1 Jun 10 22:06:43.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:44.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:44.411: INFO: rc: 1 Jun 10 22:06:44.411: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:45.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:45.357: INFO: rc: 1 Jun 10 22:06:45.357: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:46.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:46.366: INFO: rc: 1 Jun 10 22:06:46.366: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:47.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:47.278: INFO: rc: 1 Jun 10 22:06:47.278: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:48.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:48.299: INFO: rc: 1 Jun 10 22:06:48.299: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:49.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:49.288: INFO: rc: 1 Jun 10 22:06:49.288: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:50.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:50.327: INFO: rc: 1 Jun 10 22:06:50.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:51.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:51.538: INFO: rc: 1 Jun 10 22:06:51.538: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:52.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:52.277: INFO: rc: 1 Jun 10 22:06:52.277: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:53.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:53.303: INFO: rc: 1 Jun 10 22:06:53.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:54.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:54.292: INFO: rc: 1 Jun 10 22:06:54.292: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:55.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:55.284: INFO: rc: 1 Jun 10 22:06:55.284: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:56.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:56.293: INFO: rc: 1 Jun 10 22:06:56.293: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:57.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:57.348: INFO: rc: 1 Jun 10 22:06:57.348: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:58.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:06:59.638: INFO: rc: 1 Jun 10 22:06:59.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:00.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:00.347: INFO: rc: 1 Jun 10 22:07:00.347: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:01.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:01.329: INFO: rc: 1 Jun 10 22:07:01.329: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:02.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:02.272: INFO: rc: 1 Jun 10 22:07:02.272: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:03.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:03.357: INFO: rc: 1 Jun 10 22:07:03.357: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:04.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:04.289: INFO: rc: 1 Jun 10 22:07:04.289: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:05.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:05.459: INFO: rc: 1 Jun 10 22:07:05.459: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:06.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:06.312: INFO: rc: 1 Jun 10 22:07:06.312: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:07.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179' Jun 10 22:07:07.301: INFO: rc: 1 Jun 10 22:07:07.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30179 nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
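------------------------------
The two-minute window of failed probes above is produced by the e2e framework re-running the same `kubectl exec` netcat command until the NodePort answers or a deadline expires. Below is a minimal standalone sketch of that loop, assuming only that kubectl is on PATH and reusing the pod, endpoint, and timeout values from the log; it is not the framework's actual helper (which lives in test/e2e/network/service.go).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe copied from the log: exec into the client pod and try the NodePort.
	probe := []string{
		"kubectl", "--kubeconfig=/root/.kube/config", "--namespace=services-4749",
		"exec", "execpod-affinitypdwf4", "--",
		"/bin/sh", "-x", "-c", "echo hostName | nc -v -t -w 2 10.10.190.207 30179",
	}
	deadline := time.Now().Add(2 * time.Minute) // matches the 2m0s timeout in the FAIL below
	for time.Now().Before(deadline) {
		out, err := exec.Command(probe[0], probe[1:]...).CombinedOutput()
		if err == nil {
			fmt.Printf("service reachable, backend replied: %s", out)
			return
		}
		// nc exits non-zero on "Connection refused", which surfaces as "rc: 1" above.
		fmt.Println("Retrying...")
		time.Sleep(1 * time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30179 over TCP protocol")
}

In this run every attempt failed the same way, so the loop exhausts its deadline and the test aborts, as the final attempts below show.
------------------------------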
Jun 10 22:07:08.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179'
Jun 10 22:07:08.321: INFO: rc: 1
Jun 10 22:07:08.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30179
nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 10 22:07:08.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179'
Jun 10 22:07:08.589: INFO: rc: 1
Jun 10 22:07:08.590: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4749 exec execpod-affinitypdwf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30179:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30179
nc: connect to 10.10.190.207 port 30179 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 10 22:07:08.590: FAIL: Unexpected error:
    <*errors.errorString | 0xc0012fa530>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30179 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30179 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001893e40, 0x77b33d8, 0xc0030886e0, 0xc000328500, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001981b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001981b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001981b00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 10 22:07:08.591: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-4749, will wait for the garbage collector to delete the pods
Jun 10 22:07:08.667: INFO: Deleting ReplicationController affinity-nodeport took: 5.075648ms
Jun 10 22:07:08.768: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.384605ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4749".
STEP: Found 27 events.
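------------------------------
The trace above points at execAffinityTestForNonLBServiceWithOptionalTransition, the session-affinity check for a non-LoadBalancer (here NodePort) service: the affinity-nodeport ReplicationController pods back a Service with sessionAffinity: ClientIP, and the probe has to reach it through NodePort 30179 before affinity can even be asserted. A rough sketch of the Service shape involved follows; field values are inferred from this log, not copied from the test source, and the selector label in particular is an assumption.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport", Namespace: "services-4749"},
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeNodePort,     // exposed on every node; the probe used 10.10.190.207:30179
			SessionAffinity: v1.ServiceAffinityClientIP, // what the test actually exercises
			Selector:        map[string]string{"name": "affinity-nodeport"}, // assumed label
			Ports:           []v1.ServicePort{{Port: 80, Protocol: v1.ProtocolTCP}},
		},
	}
	fmt.Printf("%s/%s type=%s affinity=%s\n", svc.Namespace, svc.Name, svc.Spec.Type, svc.Spec.SessionAffinity)
}

Since "Connection refused" came back on every probe while the backing pods show as Started in the events below, the failure is consistent with the node-level NodePort plumbing (kube-proxy rules) rather than the pods themselves.
------------------------------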
Jun 10 22:07:17.286: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-784pk: { } Scheduled: Successfully assigned services-4749/affinity-nodeport-784pk to node2
Jun 10 22:07:17.286: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-s59tk: { } Scheduled: Successfully assigned services-4749/affinity-nodeport-s59tk to node2
Jun 10 22:07:17.286: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-w8zqs: { } Scheduled: Successfully assigned services-4749/affinity-nodeport-w8zqs to node2
Jun 10 22:07:17.286: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitypdwf4: { } Scheduled: Successfully assigned services-4749/execpod-affinitypdwf4 to node1
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:51 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-784pk
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:51 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-w8zqs
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:51 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-s59tk
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:54 +0000 UTC - event for affinity-nodeport-784pk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:54 +0000 UTC - event for affinity-nodeport-w8zqs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:55 +0000 UTC - event for affinity-nodeport-784pk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.362564849s
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:55 +0000 UTC - event for affinity-nodeport-784pk: {kubelet node2} Created: Created container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:55 +0000 UTC - event for affinity-nodeport-s59tk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:55 +0000 UTC - event for affinity-nodeport-w8zqs: {kubelet node2} Created: Created container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:55 +0000 UTC - event for affinity-nodeport-w8zqs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.235997436s
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:56 +0000 UTC - event for affinity-nodeport-784pk: {kubelet node2} Started: Started container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:56 +0000 UTC - event for affinity-nodeport-s59tk: {kubelet node2} Started: Started container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:56 +0000 UTC - event for affinity-nodeport-s59tk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 990.598449ms
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:56 +0000 UTC - event for affinity-nodeport-s59tk: {kubelet node2} Created: Created container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:04:56 +0000 UTC - event for affinity-nodeport-w8zqs: {kubelet node2} Started: Started container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:05:04 +0000 UTC - event for execpod-affinitypdwf4: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 422.096747ms
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:05:04 +0000 UTC - event for execpod-affinitypdwf4: {kubelet node1} Created: Created container agnhost-container
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:05:04 +0000 UTC - event for execpod-affinitypdwf4: {kubelet node1} Started: Started container agnhost-container
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:05:04 +0000 UTC - event for execpod-affinitypdwf4: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:07:08 +0000 UTC - event for affinity-nodeport-784pk: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:07:08 +0000 UTC - event for affinity-nodeport-s59tk: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:07:08 +0000 UTC - event for affinity-nodeport-w8zqs: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 10 22:07:17.286: INFO: At 2022-06-10 22:07:08 +0000 UTC - event for execpod-affinitypdwf4: {kubelet node1} Killing: Stopping container agnhost-container
Jun 10 22:07:17.288: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 10 22:07:17.288: INFO: 
Jun 10 22:07:17.292: INFO: Logging node info for node master1
Jun 10 22:07:17.294: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 46034 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:12 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:12 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:12 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:12 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 22:07:17.295: INFO: Logging kubelet events for node master1
Jun 10 22:07:17.297: INFO: Logging pods the kubelet thinks is on node master1
Jun 10 22:07:17.331: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-proxy ready: true, restart count 3
Jun 10 22:07:17.331: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container docker-registry ready: true, restart count 0
Jun 10 22:07:17.331: INFO: 	Container nginx ready: true, restart count 0
Jun 10 22:07:17.331: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container nfd-controller ready: true, restart count 0
Jun 10 22:07:17.331: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:07:17.331: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun 10 22:07:17.331: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:07:17.331: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 22:07:17.331: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 22:07:17.331: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 10 22:07:17.331: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun 10 22:07:17.331: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Init container install-cni ready: true, restart count 0
Jun 10 22:07:17.331: INFO: 	Container kube-flannel ready: true, restart count 1
Jun 10 22:07:17.331: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 22:07:17.331: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.331: INFO: 	Container autoscaler ready: true, restart count 1
Jun 10 22:07:17.437: INFO: Latency metrics for node master1
Jun 10 22:07:17.437: INFO: Logging node info for node master2
Jun 10 22:07:17.440: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43
46020 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:10 +0000 
UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:10 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:10 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:10 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:17.440: INFO: Logging kubelet events for node master2 Jun 10 22:07:17.444: INFO: Logging pods the kubelet thinks is on node master2 Jun 10 22:07:17.462: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.462: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 10 22:07:17.462: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.462: INFO: Container kube-scheduler ready: true, restart count 3 Jun 10 22:07:17.462: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.462: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:07:17.462: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:17.463: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:17.463: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:07:17.463: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.463: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:17.463: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.463: INFO: Container coredns ready: true, restart count 1 Jun 10 22:07:17.463: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.463: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:17.463: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:17.463: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:17.463: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:17.540: INFO: Latency metrics for node master2 Jun 10 22:07:17.540: INFO: Logging node info for node master3 Jun 10 22:07:17.542: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 45961 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:07 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:07 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:07 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:07 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:17.543: INFO: Logging kubelet events for node master3 Jun 10 22:07:17.545: INFO: Logging pods the kubelet thinks is on node master3 Jun 10 22:07:17.556: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:17.556: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:07:17.556: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:17.556: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Container coredns ready: true, restart count 1 Jun 10 22:07:17.556: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:17.556: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:17.556: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:17.556: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:17.556: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:07:17.556: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Container kube-scheduler ready: true, restart count 1 Jun 10 22:07:17.556: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.556: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:07:17.631: INFO: Latency metrics for node master3 Jun 10 22:07:17.631: INFO: Logging node info for node node1 Jun 10 22:07:17.634: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 46118 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:17.636: INFO: Logging kubelet events for node node1 Jun 10 22:07:17.637: INFO: Logging pods the kubelet thinks is on node node1 Jun 10 22:07:17.653: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 
20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:17.653: INFO: Container discover ready: false, restart count 0
Jun 10 22:07:17.653: INFO: Container init ready: false, restart count 0
Jun 10 22:07:17.653: INFO: Container install ready: false, restart count 0
Jun 10 22:07:17.653: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container config-reloader ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container grafana ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container prometheus ready: true, restart count 1
Jun 10 22:07:17.654: INFO: externalname-service-m9jbh started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container externalname-service ready: true, restart count 0
Jun 10 22:07:17.654: INFO: execpodsdwps started at 2022-06-10 22:07:06 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container agnhost-container ready: true, restart count 0
Jun 10 22:07:17.654: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:07:17.654: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:07:17.654: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:07:17.654: INFO: replace-27581646-hfs4n started at 2022-06-10 22:06:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container c ready: true, restart count 0
Jun 10 22:07:17.654: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 22:07:17.654: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container collectd ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:07:17.654: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:07:17.654: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:07:17.654: INFO: test-pod started at 2022-06-10 22:02:20 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container webserver ready: true, restart count 0
Jun 10 22:07:17.654: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:07:17.654: INFO: execpod-affinitybwztk started at 2022-06-10 22:05:36 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container agnhost-container ready: true, restart count 0
Jun 10 22:07:17.654: INFO: execpodclt87 started at 2022-06-10 22:07:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container agnhost-container ready: true, restart count 0
Jun 10 22:07:17.654: INFO: affinity-nodeport-transition-cx685 started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 10 22:07:17.654: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 22:07:17.654: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:07:17.654: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:07:17.654: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:07:17.654: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container tas-extender ready: true, restart count 0
Jun 10 22:07:17.654: INFO: test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 started at 2022-06-10 22:06:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container test-webserver ready: true, restart count 0
Jun 10 22:07:17.654: INFO: ss2-1 started at 2022-06-10 22:06:56 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.654: INFO: Container webserver ready: true, restart count 0
Jun 10 22:07:17.906: INFO: Latency metrics for node node1
Jun 10 22:07:17.906: INFO: Logging node info for node node2
Jun 10 22:07:17.909: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 46071 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:12:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:17.909: INFO: Logging kubelet events for node node2 Jun 10 22:07:17.912: INFO: Logging pods the kubelet thinks is on node node2 Jun 10 22:07:17.965: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.965: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:07:17.965: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.965: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:17.965: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:17.965: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:07:17.965: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:17.965: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:07:17.965: INFO: Container reconcile ready: true, restart 
count 0
Jun 10 22:07:17.965: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:07:17.965: INFO: pod-configmaps-4a596170-1b68-470c-8207-7b0693dbac74 started at 2022-06-10 22:05:48 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container agnhost-container ready: true, restart count 0
Jun 10 22:07:17.965: INFO: ss2-0 started at 2022-06-10 22:07:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container webserver ready: true, restart count 0
Jun 10 22:07:17.965: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 22:07:17.965: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:07:17.965: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:07:17.965: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container discover ready: false, restart count 0
Jun 10 22:07:17.965: INFO: Container init ready: false, restart count 0
Jun 10 22:07:17.965: INFO: Container install ready: false, restart count 0
Jun 10 22:07:17.965: INFO: externalname-service-5zzgz started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container externalname-service ready: true, restart count 0
Jun 10 22:07:17.965: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:07:17.965: INFO: externalsvc-qg25z started at 2022-06-10 22:07:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container externalsvc ready: false, restart count 0
Jun 10 22:07:17.965: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container collectd ready: true, restart count 0
Jun 10 22:07:17.965: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:07:17.965: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:07:17.965: INFO: pod1 started at 2022-06-10 22:06:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container container1 ready: true, restart count 0
Jun 10 22:07:17.965: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 22:07:17.965: INFO: affinity-nodeport-transition-rpgll started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 10 22:07:17.965: INFO: pod2 started at 2022-06-10 22:06:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container container1 ready: true, restart count 0
Jun 10 22:07:17.965: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:07:17.965: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:07:17.965: INFO: busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 started at 2022-06-10 22:06:53 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 ready: true, restart count 0
Jun 10 22:07:17.965: INFO: pod-configmaps-4505e77a-d216-45c0-a848-100bb0d6ca1d started at 2022-06-10 22:07:16 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container createcm-volume-test ready: false, restart count 0
Jun 10 22:07:17.965: INFO: Container delcm-volume-test ready: false, restart count 0
Jun 10 22:07:17.965: INFO: Container updcm-volume-test ready: false, restart count 0
Jun 10 22:07:17.965: INFO: affinity-nodeport-transition-st24r started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:17.965: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 10 22:07:18.578: INFO: Latency metrics for node node2
Jun 10 22:07:18.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4749" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [147.580 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jun 10 22:07:08.590: Unexpected error:
      <*errors.errorString | 0xc0012fa530>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30179 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30179 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":355,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:48.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-e34d04ae-80f1-4d6d-92ec-13dd1124a7db
STEP: Creating the pod
Jun 10 22:05:48.773: INFO: The status of Pod pod-configmaps-4a596170-1b68-470c-8207-7b0693dbac74 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:05:50.776: INFO: The status of Pod pod-configmaps-4a596170-1b68-470c-8207-7b0693dbac74 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:05:52.778: INFO: The status of Pod pod-configmaps-4a596170-1b68-470c-8207-7b0693dbac74 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:05:54.777: INFO: The status of Pod pod-configmaps-4a596170-1b68-470c-8207-7b0693dbac74 is Running (Ready = true)
STEP: Updating configmap configmap-test-upd-e34d04ae-80f1-4d6d-92ec-13dd1124a7db
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:23.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6388" for this suite.

• [SLOW TEST:94.625 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":234,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:02:20.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-2827
[It] Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2827
STEP: Creating statefulset with conflicting port in namespace statefulset-2827
STEP: Waiting until pod test-pod will start running in namespace statefulset-2827
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2827
Jun 10 22:07:24.810: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a89080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a89080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001a89080, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Jun 10 22:07:24.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2827 describe po test-pod'
Jun 10 22:07:25.003: INFO: stderr: ""
Jun 10 22:07:25.003: INFO: stdout: "Name: test-pod\nNamespace: statefulset-2827\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 10 Jun 2022 22:02:20 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.195\"\n ],\n \"mac\": \"16:1f:0c:c4:d3:e7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.195\"\n ],\n \"mac\": \"16:1f:0c:c4:d3:e7\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.195\nIPs:\n IP: 10.244.3.195\nContainers:\n webserver:\n Container ID: docker://9e3cb8887fa9f804581633c96d381af5d2e2593e9d8c260e743628898859b7f3\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 10 Jun 2022 22:02:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh2t9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bh2t9:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m1s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m1s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 278.79598ms\n Normal Created 5m kubelet Created container webserver\n Normal Started 5m kubelet Started container webserver\n"
Jun 10 22:07:25.004: INFO: Output of kubectl describe test-pod:
Name: test-pod
Namespace: statefulset-2827
Priority: 0
Node: node1/10.10.190.207
Start Time: Fri, 10 Jun 2022 22:02:20 +0000
Labels: 
Annotations: k8s.v1.cni.cncf.io/network-status:
  [{
      "name": "default-cni-network",
      "interface": "eth0",
      "ips": [
          "10.244.3.195"
      ],
      "mac": "16:1f:0c:c4:d3:e7",
      "default": true,
      "dns": {}
  }]
  k8s.v1.cni.cncf.io/networks-status:
  [{
      "name": "default-cni-network",
      "interface": "eth0",
      "ips": [
          "10.244.3.195"
      ],
      "mac": "16:1f:0c:c4:d3:e7",
      "default": true,
      "dns": {}
  }]
  kubernetes.io/psp: privileged
Status: Running
IP: 10.244.3.195
IPs:
  IP: 10.244.3.195
Containers:
  webserver:
    Container ID: docker://9e3cb8887fa9f804581633c96d381af5d2e2593e9d8c260e743628898859b7f3
    Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50
    Port: 21017/TCP
    Host Port: 21017/TCP
    State: Running
      Started: Fri, 10 Jun 2022 22:02:24 +0000
    Ready: True
    Restart Count: 0
    Environment: 
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh2t9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-bh2t9:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional: 
    DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: 
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  5m1s  kubelet  Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
  Normal  Pulled   5m1s  kubelet  Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 278.79598ms
  Normal  Created  5m    kubelet  Created container webserver
  Normal  Started  5m    kubelet  Started container webserver
Jun 10 22:07:25.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2827 logs test-pod --tail=100'
Jun 10 22:07:25.173: INFO: stderr: ""
Jun 10 22:07:25.173: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.195. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.195. Set the 'ServerName' directive globally to suppress this message\n[Fri Jun 10 22:02:24.242311 2022] [mpm_event:notice] [pid 1:tid 139737580686184] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jun 10 22:02:24.242352 2022] [core:notice] [pid 1:tid 139737580686184] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Jun 10 22:07:25.173: INFO: Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.195. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.195. Set the 'ServerName' directive globally to suppress this message
[Fri Jun 10 22:02:24.242311 2022] [mpm_event:notice] [pid 1:tid 139737580686184] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Fri Jun 10 22:02:24.242352 2022] [core:notice] [pid 1:tid 139737580686184] AH00094: Command line: 'httpd -D FOREGROUND'
Jun 10 22:07:25.173: INFO: Deleting all statefulset in ns statefulset-2827
Jun 10 22:07:25.176: INFO: Scaling statefulset ss to 0
Jun 10 22:07:25.185: INFO: Waiting for statefulset status.replicas updated to 0
Jun 10 22:07:25.187: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "statefulset-2827".
STEP: Found 7 events.
Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:20 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]]
Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:20 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used.
Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]] Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:20 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:23 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:23 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 278.79598ms Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:24 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver Jun 10 22:07:25.199: INFO: At 2022-06-10 22:02:24 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver Jun 10 22:07:25.202: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 22:07:25.202: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:02:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:02:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:02:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:02:20 +0000 UTC }] Jun 10 22:07:25.202: INFO: Jun 10 22:07:25.206: INFO: Logging node info for node master1 Jun 10 22:07:25.208: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 46215 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:22 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:22 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:22 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:22 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 
registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:25.209: INFO: Logging kubelet events for node master1 Jun 10 22:07:25.212: INFO: Logging pods the kubelet thinks is on node master1 Jun 10 22:07:25.223: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Init container install-cni ready: true, restart count 0 Jun 10 22:07:25.223: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:07:25.223: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:25.223: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container autoscaler ready: true, restart count 1 Jun 10 22:07:25.223: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.223: INFO: Container docker-registry ready: true, restart count 0 Jun 10 22:07:25.223: INFO: Container nginx ready: true, restart count 0 Jun 10 22:07:25.223: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container nfd-controller ready: true, restart count 0 Jun 10 22:07:25.223: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.223: INFO: Container prometheus-operator ready: true, restart count 0 Jun 10 22:07:25.223: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.223: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:25.223: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:25.223: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:07:25.223: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-scheduler ready: true, restart count 0 Jun 10 22:07:25.223: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.223: INFO: Container kube-proxy ready: true, restart count 3 Jun 10 22:07:25.326: 
INFO: Latency metrics for node master1 Jun 10 22:07:25.326: INFO: Logging node info for node master2 Jun 10 22:07:25.329: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43 46189 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 
UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:20 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:20 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:20 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:20 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:25.330: INFO: Logging kubelet events for node master2 Jun 10 22:07:25.332: INFO: Logging pods the kubelet thinks is on node master2 Jun 10 22:07:25.340: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:25.340: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Container coredns ready: true, restart count 1 Jun 10 22:07:25.340: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 10 22:07:25.340: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Container kube-scheduler ready: true, restart count 3 Jun 10 22:07:25.340: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:07:25.340: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:25.340: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:07:25.340: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.340: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:25.340: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.340: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.340: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:25.427: INFO: Latency metrics for node master2 Jun 10 22:07:25.427: INFO: Logging node info for node master3 Jun 10 22:07:25.430: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 46137 0 
2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 
UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:25.430: INFO: Logging kubelet events for node master3 Jun 10 22:07:25.433: INFO: Logging pods the kubelet thinks is on node master3 Jun 10 22:07:25.441: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:25.441: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:07:25.441: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Container kube-scheduler ready: true, restart count 1 Jun 10 22:07:25.441: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:07:25.441: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:25.441: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:07:25.441: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:25.441: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.441: INFO: Container coredns ready: true, restart count 1 Jun 10 22:07:25.441: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.441: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.441: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:25.520: INFO: Latency metrics for node master3 Jun 10 22:07:25.520: INFO: Logging node info for node node1 Jun 10 22:07:25.524: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 46118 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:17 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:25.525: INFO: Logging kubelet events for node node1 Jun 10 22:07:25.529: INFO: Logging pods the kubelet thinks is on node node1 Jun 10 22:07:25.547: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 
UTC (0+4 container statuses recorded) Jun 10 22:07:25.547: INFO: Container config-reloader ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container grafana ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container prometheus ready: true, restart count 1 Jun 10 22:07:25.547: INFO: externalname-service-m9jbh started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container externalname-service ready: true, restart count 0 Jun 10 22:07:25.547: INFO: execpodsdwps started at 2022-06-10 22:07:06 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:07:25.547: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 10 22:07:25.547: INFO: Container discover ready: false, restart count 0 Jun 10 22:07:25.547: INFO: Container init ready: false, restart count 0 Jun 10 22:07:25.547: INFO: Container install ready: false, restart count 0 Jun 10 22:07:25.547: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:07:25.547: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.547: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:25.547: INFO: replace-27581646-hfs4n started at 2022-06-10 22:06:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container c ready: true, restart count 0 Jun 10 22:07:25.547: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:07:25.547: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:07:25.547: INFO: Container collectd ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.547: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 22:07:25.547: INFO: test-pod started at 2022-06-10 22:02:20 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container webserver ready: true, restart count 0 Jun 10 22:07:25.547: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.547: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:07:25.547: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:07:25.547: INFO: execpod-affinitybwztk started at 2022-06-10 22:05:36 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:07:25.547: INFO: execpodclt87 started at 2022-06-10 22:07:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:07:25.547: INFO: affinity-nodeport-transition-cx685 started at 
2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 10 22:07:25.547: INFO: foo-5bj4z started at 2022-06-10 22:07:18 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container c ready: true, restart count 0 Jun 10 22:07:25.547: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:07:25.547: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:25.547: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:07:25.547: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:25.547: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container tas-extender ready: true, restart count 0 Jun 10 22:07:25.547: INFO: test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 started at 2022-06-10 22:06:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container test-webserver ready: true, restart count 0 Jun 10 22:07:25.547: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.547: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:07:25.792: INFO: Latency metrics for node node1 Jun 10 22:07:25.792: INFO: Logging node info for node node2 Jun 10 22:07:25.794: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 46071 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true 
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:12:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:15 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:25.795: INFO: Logging kubelet events for node node2 Jun 10 22:07:25.797: INFO: Logging pods the kubelet thinks is on node node2 Jun 10 22:07:25.807: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded) Jun 10 22:07:25.807: INFO: Container discover ready: false, restart count 0 Jun 10 22:07:25.807: INFO: Container init ready: false, restart count 0 Jun 10 22:07:25.807: INFO: Container install ready: false, restart count 0 Jun 10 22:07:25.807: INFO: externalname-service-5zzgz started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container externalname-service ready: true, restart count 0 Jun 10 22:07:25.807: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:07:25.807: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) 
Jun 10 22:07:25.807: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:25.807: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:07:25.807: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:07:25.807: INFO: externalsvc-qg25z started at 2022-06-10 22:07:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container externalsvc ready: false, restart count 0 Jun 10 22:07:25.807: INFO: pod1 started at 2022-06-10 22:06:42 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container container1 ready: false, restart count 0 Jun 10 22:07:25.807: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:07:25.807: INFO: Container collectd ready: true, restart count 0 Jun 10 22:07:25.807: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:07:25.807: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.807: INFO: foo-bx2t9 started at 2022-06-10 22:07:18 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container c ready: true, restart count 0 Jun 10 22:07:25.807: INFO: pod2 started at 2022-06-10 22:06:42 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container container1 ready: false, restart count 0 Jun 10 22:07:25.807: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 22:07:25.807: INFO: affinity-nodeport-transition-rpgll started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 10 22:07:25.807: INFO: pod-configmaps-4505e77a-d216-45c0-a848-100bb0d6ca1d started at 2022-06-10 22:07:16 +0000 UTC (0+3 container statuses recorded) Jun 10 22:07:25.807: INFO: Container createcm-volume-test ready: true, restart count 0 Jun 10 22:07:25.807: INFO: Container delcm-volume-test ready: true, restart count 0 Jun 10 22:07:25.807: INFO: Container updcm-volume-test ready: true, restart count 0 Jun 10 22:07:25.807: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.807: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:25.807: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:25.807: INFO: busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 started at 2022-06-10 22:06:53 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container busybox-readonly-fsa7711962-df66-45a0-ad53-92e2e9eeb954 ready: true, restart count 0 Jun 10 22:07:25.807: INFO: test-rolling-update-controller-ctzzq started at 2022-06-10 22:07:23 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container httpd ready: false, restart count 0 Jun 10 22:07:25.807: INFO: affinity-nodeport-transition-st24r started at 2022-06-10 22:05:27 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 10 22:07:25.807: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: 
Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:07:25.807: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:25.807: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:07:25.807: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:07:25.807: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:07:25.807: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:25.807: INFO: ss2-0 started at 2022-06-10 22:07:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container webserver ready: false, restart count 0 Jun 10 22:07:25.807: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:07:25.807: INFO: pod-configmaps-4a596170-1b68-470c-8207-7b0693dbac74 started at 2022-06-10 22:05:48 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:25.807: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:07:26.096: INFO: Latency metrics for node node2 Jun 10 22:07:26.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2827" for this suite. • Failure [305.350 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:07:24.810: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":24,"skipped":484,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:16.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-a78445a4-0388-4b3f-8d1d-ae14c1eb264c STEP: Creating configMap with name cm-test-opt-upd-36c6c9ed-7fb1-47b6-af4b-324b0e56715d STEP: Creating the pod Jun 10 22:07:16.570: INFO: The status of Pod pod-configmaps-4505e77a-d216-45c0-a848-100bb0d6ca1d is Pending, 
waiting for it to be Running (with Ready = true) Jun 10 22:07:18.575: INFO: The status of Pod pod-configmaps-4505e77a-d216-45c0-a848-100bb0d6ca1d is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:07:20.576: INFO: The status of Pod pod-configmaps-4505e77a-d216-45c0-a848-100bb0d6ca1d is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:07:22.574: INFO: The status of Pod pod-configmaps-4505e77a-d216-45c0-a848-100bb0d6ca1d is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-a78445a4-0388-4b3f-8d1d-ae14c1eb264c STEP: Updating configmap cm-test-opt-upd-36c6c9ed-7fb1-47b6-af4b-324b0e56715d STEP: Creating configMap with name cm-test-opt-create-8d926d23-5fc1-4beb-962b-1a3041ff7525 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:26.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5921" for this suite. • [SLOW TEST:10.251 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":412,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:00.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8075 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8075 STEP: creating replication controller externalsvc in namespace services-8075 I0610 22:07:00.394304 27 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8075, replica count: 2 I0610 22:07:03.446274 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:07:06.447204 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 10 22:07:06.458: INFO: Creating 
new exec pod Jun 10 22:07:10.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8075 exec execpodsdwps -- /bin/sh -x -c nslookup clusterip-service.services-8075.svc.cluster.local' Jun 10 22:07:10.741: INFO: stderr: "+ nslookup clusterip-service.services-8075.svc.cluster.local\n" Jun 10 22:07:10.741: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-8075.svc.cluster.local\tcanonical name = externalsvc.services-8075.svc.cluster.local.\nName:\texternalsvc.services-8075.svc.cluster.local\nAddress: 10.233.50.65\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8075, will wait for the garbage collector to delete the pods Jun 10 22:07:10.800: INFO: Deleting ReplicationController externalsvc took: 5.483931ms Jun 10 22:07:10.900: INFO: Terminating ReplicationController externalsvc pods took: 100.400165ms Jun 10 22:07:27.410: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:27.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8075" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:27.066 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":40,"skipped":637,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:06:00.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8699 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Jun 10 22:06:00.346: INFO: Found 0 stateful pods, waiting for 3 Jun 10 22:06:10.350: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:06:10.350: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:06:10.350: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 10 22:06:20.350: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true Jun 10 22:06:20.350: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:06:20.350: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jun 10 22:06:20.379: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 10 22:06:30.409: INFO: Updating stateful set ss2 Jun 10 22:06:30.415: INFO: Waiting for Pod statefulset-8699/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Jun 10 22:06:40.439: INFO: Found 1 stateful pods, waiting for 3 Jun 10 22:06:50.444: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:06:50.444: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 10 22:06:50.444: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 10 22:06:50.466: INFO: Updating stateful set ss2 Jun 10 22:06:50.472: INFO: Waiting for Pod statefulset-8699/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 10 22:07:00.495: INFO: Updating stateful set ss2 Jun 10 22:07:00.501: INFO: Waiting for StatefulSet statefulset-8699/ss2 to complete update Jun 10 22:07:00.501: INFO: Waiting for Pod statefulset-8699/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 10 22:07:10.507: INFO: Deleting all statefulset in ns statefulset-8699 Jun 10 22:07:10.510: INFO: Scaling statefulset ss2 to 0 Jun 10 22:07:30.525: INFO: Waiting for statefulset status.replicas updated to 0 Jun 10 22:07:30.529: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:30.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8699" for this suite. 
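The canary and phased phases above are both driven by a single knob, spec.updateStrategy.rollingUpdate.partition: pods with an ordinal greater than or equal to the partition roll to the update revision, the rest stay on the current one. For readers reproducing the flow outside the e2e framework, here is a minimal client-go sketch of the same sequence; the namespace statefulset-8699, set name ss2, container name webserver, and the two httpd images come from the log above, while the kubeconfig path and a client-go version compatible with the v1.21 cluster are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Same kind of kubeconfig the suite loads (">>> kubeConfig" above).
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	sts := kubernetes.NewForConfigOrDie(cfg).AppsV1().StatefulSets("statefulset-8699")
	ctx := context.TODO()

	// Phase 1: bump the image while the partition (3) still excludes every
	// ordinal of the 3-replica set, so no pod rolls yet.
	full := []byte(`{"spec":{` +
		`"updateStrategy":{"rollingUpdate":{"partition":3}},` +
		`"template":{"spec":{"containers":[{"name":"webserver",` +
		`"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1"}]}}}}`)
	if _, err := sts.Patch(ctx, "ss2", types.StrategicMergePatchType, full, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Phase 2: canary - partition 2 rolls only the highest ordinal, ss2-2.
	canary := []byte(`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}`)
	if _, err := sts.Patch(ctx, "ss2", types.StrategicMergePatchType, canary, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Phase 3: phased rollout - partition 0 lets the remaining pods update.
	phased := []byte(`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}`)
	if _, err := sts.Patch(ctx, "ss2", types.StrategicMergePatchType, phased, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	got, err := sts.Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The log's "revision ss2-5bbbc9fc94 update revision ss2-677d6db895"
	// waits compare exactly these two status fields.
	fmt.Println(got.Status.CurrentRevision, got.Status.UpdateRevision)
}
```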
• [SLOW TEST:90.234 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":32,"skipped":760,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:30.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Jun 10 22:07:30.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3955 api-versions' Jun 10 22:07:30.701: INFO: stderr: "" Jun 10 22:07:30.701: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:30.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3955" for this suite. 
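The api-versions output above is just the server's discovery document flattened to group/version strings. The same check can be made programmatically; a short sketch with client-go's discovery client (kubeconfig path assumed, as before), which should print the same list the kubectl run produced:

```go
package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerGroups merges the legacy /api group (served as bare "v1") with
	// everything under /apis - the document kubectl api-versions reads.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	foundV1 := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. apps/v1, policy/v1beta1, v1
			if v.GroupVersion == "v1" {
				foundV1 = true
			}
		}
	}
	fmt.Println("v1 in available api versions:", foundV1) // the test's assertion
}
```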
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":33,"skipped":765,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:30.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:30.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7776" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":34,"skipped":781,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:26.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 10 22:07:31.883: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:31.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4812" for this suite. 
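The termination-message case above needs only two things from the pod spec: a securityContext that runs the container as a non-root UID, and a terminationMessagePath pointing somewhere other than the default /dev/termination-log. A minimal sketch of an equivalent pod follows; busybox:1.28 is taken from the node image lists above, while the UID, custom path, pod name, and default namespace are illustrative, and this is not the conformance framework's own pod builder:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	uid := int64(1000) // any non-root UID; the test title requires non-root
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "termination-message-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.28",
				// Write the message, then exit 0 so the pod reaches Succeeded.
				Command: []string{"/bin/sh", "-c",
					"echo -n DONE > /dev/termination-custom-log"},
				// Non-default path: the kubelet reads this file into
				// status.containerStatuses[].state.terminated.message.
				TerminationMessagePath: "/dev/termination-custom-log",
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Once terminated, the message should read back as DONE - the
	// "Expected: &{DONE}" comparison in the log above.
	fmt.Println("created pod", created.Name)
}
```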
• [SLOW TEST:5.073 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":441,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:23.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:07:23.384: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 10 22:07:23.390: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 10 22:07:28.394: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 10 22:07:28.395: INFO: Creating deployment "test-rolling-update-deployment" Jun 10 22:07:28.400: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 10 22:07:28.405: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 10 22:07:30.412: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 10 22:07:30.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:07:32.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495648, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:07:34.419: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 10 22:07:34.426: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3675 2116d564-1359-45d6-a1fe-8368e87b2dfa 46982 1 2022-06-10 22:07:28 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-06-10 22:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006d99908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-10 22:07:28 +0000 UTC,LastTransitionTime:2022-06-10 22:07:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-06-10 22:07:32 +0000 UTC,LastTransitionTime:2022-06-10 22:07:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 10 22:07:34.429: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-3675 f4ca5a2f-5ad8-400d-9b18-961c64606ab7 46969 1 2022-06-10 22:07:28 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2116d564-1359-45d6-a1fe-8368e87b2dfa 0xc006d99db7 0xc006d99db8}] [] [{kube-controller-manager Update apps/v1 2022-06-10 22:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2116d564-1359-45d6-a1fe-8368e87b2dfa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] []
nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006d99e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:07:34.429: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 10 22:07:34.429: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3675 6bab68ff-d21d-47df-a8b0-8d8cda15dc93 46981 2 2022-06-10 22:07:23 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2116d564-1359-45d6-a1fe-8368e87b2dfa 0xc006d99ca7 0xc006d99ca8}] [] [{e2e.test Update apps/v1 2022-06-10 22:07:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-10 22:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2116d564-1359-45d6-a1fe-8368e87b2dfa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006d99d48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 10 22:07:34.433: INFO: Pod "test-rolling-update-deployment-585b757574-k4x6f" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-k4x6f test-rolling-update-deployment-585b757574- 
deployment-3675 143f03c2-299d-47f8-b6fc-f59c37744918 46968 0 2022-06-10 22:07:28 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.18" ], "mac": "36:11:7a:93:43:f7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.18" ], "mac": "36:11:7a:93:43:f7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 f4ca5a2f-5ad8-400d-9b18-961c64606ab7 0xc003a3a26f 0xc003a3a280}] [] [{kube-controller-manager Update v1 2022-06-10 22:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4ca5a2f-5ad8-400d-9b18-961c64606ab7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-10 22:07:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-10 22:07:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nv8sv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resourc
es:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nv8sv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:07:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:07:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:07:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-10 22:07:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.18,StartTime:2022-06-10 22:07:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-10 22:07:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://23f82ed165fe892cdb14ccdbf0932e6a50639209feb0ca6bd5bee3565cbdf646,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] 
Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:34.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3675" for this suite. • [SLOW TEST:11.083 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":24,"skipped":235,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:30.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 22:07:31.515: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 22:07:33.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495651, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495651, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495651, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495651, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:07:36.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:36.640: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-5507" for this suite. STEP: Destroying namespace "webhook-5507-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.873 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":35,"skipped":791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:26.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:07:26.154: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4850 I0610 22:07:26.173496 28 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4850, replica count: 1 I0610 22:07:27.225208 28 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:07:28.227130 28 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0610 22:07:29.228263 28 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 10 22:07:29.337: INFO: Created: latency-svc-9vvwn Jun 10 22:07:29.341: INFO: Got endpoints: latency-svc-9vvwn [13.071398ms] Jun 10 22:07:29.346: INFO: Created: latency-svc-bchfc Jun 10 22:07:29.349: INFO: Got endpoints: latency-svc-bchfc [7.087392ms] Jun 10 22:07:29.353: INFO: Created: latency-svc-wz4mw Jun 10 22:07:29.355: INFO: Got endpoints: latency-svc-wz4mw [13.01472ms] Jun 10 22:07:29.355: INFO: Created: latency-svc-zc7w9 Jun 10 22:07:29.357: INFO: Got endpoints: latency-svc-zc7w9 [15.25136ms] Jun 10 22:07:29.357: INFO: Created: latency-svc-vf68j Jun 10 22:07:29.359: INFO: Got endpoints: latency-svc-vf68j [17.417663ms] Jun 10 22:07:29.360: INFO: Created: latency-svc-fbk7s Jun 10 22:07:29.362: INFO: Got endpoints: latency-svc-fbk7s [20.504593ms] Jun 10 22:07:29.363: INFO: Created: latency-svc-8v9vs Jun 10 22:07:29.365: INFO: Got endpoints: latency-svc-8v9vs [8.145228ms] Jun 10 22:07:29.365: INFO: Created: latency-svc-j68dv Jun 10 22:07:29.368: INFO: Got endpoints: latency-svc-j68dv [25.707779ms] Jun 10 22:07:29.369: INFO: Created: latency-svc-dqvzm Jun 10 22:07:29.371: INFO: Got endpoints: latency-svc-dqvzm [28.802179ms] Jun 10 22:07:29.372: INFO: Created: 
latency-svc-lhk7k Jun 10 22:07:29.374: INFO: Got endpoints: latency-svc-lhk7k [31.463701ms] Jun 10 22:07:29.375: INFO: Created: latency-svc-757c4 Jun 10 22:07:29.377: INFO: Got endpoints: latency-svc-757c4 [35.103336ms] Jun 10 22:07:29.378: INFO: Created: latency-svc-xcxlr Jun 10 22:07:29.380: INFO: Got endpoints: latency-svc-xcxlr [37.642ms] Jun 10 22:07:29.381: INFO: Created: latency-svc-6bd75 Jun 10 22:07:29.383: INFO: Got endpoints: latency-svc-6bd75 [40.488495ms] Jun 10 22:07:29.384: INFO: Created: latency-svc-5l8p4 Jun 10 22:07:29.386: INFO: Got endpoints: latency-svc-5l8p4 [43.479391ms] Jun 10 22:07:29.387: INFO: Created: latency-svc-9kfvt Jun 10 22:07:29.389: INFO: Got endpoints: latency-svc-9kfvt [46.047125ms] Jun 10 22:07:29.389: INFO: Created: latency-svc-sch6t Jun 10 22:07:29.391: INFO: Got endpoints: latency-svc-sch6t [48.82134ms] Jun 10 22:07:29.392: INFO: Created: latency-svc-zn865 Jun 10 22:07:29.394: INFO: Got endpoints: latency-svc-zn865 [52.033328ms] Jun 10 22:07:29.395: INFO: Created: latency-svc-7mvpt Jun 10 22:07:29.397: INFO: Got endpoints: latency-svc-7mvpt [48.308981ms] Jun 10 22:07:29.398: INFO: Created: latency-svc-7rp7p Jun 10 22:07:29.400: INFO: Got endpoints: latency-svc-7rp7p [45.357186ms] Jun 10 22:07:29.401: INFO: Created: latency-svc-n4h4x Jun 10 22:07:29.403: INFO: Got endpoints: latency-svc-n4h4x [43.532779ms] Jun 10 22:07:29.404: INFO: Created: latency-svc-pd5lk Jun 10 22:07:29.407: INFO: Got endpoints: latency-svc-pd5lk [44.108507ms] Jun 10 22:07:29.407: INFO: Created: latency-svc-2kflb Jun 10 22:07:29.409: INFO: Created: latency-svc-2wf7m Jun 10 22:07:29.409: INFO: Got endpoints: latency-svc-2kflb [43.705202ms] Jun 10 22:07:29.411: INFO: Got endpoints: latency-svc-2wf7m [43.223566ms] Jun 10 22:07:29.412: INFO: Created: latency-svc-drff8 Jun 10 22:07:29.414: INFO: Got endpoints: latency-svc-drff8 [42.808587ms] Jun 10 22:07:29.415: INFO: Created: latency-svc-8k2nd Jun 10 22:07:29.417: INFO: Got endpoints: latency-svc-8k2nd [43.177853ms] Jun 10 22:07:29.417: INFO: Created: latency-svc-4qgzq Jun 10 22:07:29.420: INFO: Got endpoints: latency-svc-4qgzq [42.511921ms] Jun 10 22:07:29.420: INFO: Created: latency-svc-ftzdm Jun 10 22:07:29.423: INFO: Got endpoints: latency-svc-ftzdm [42.981907ms] Jun 10 22:07:29.423: INFO: Created: latency-svc-8ncz9 Jun 10 22:07:29.425: INFO: Got endpoints: latency-svc-8ncz9 [41.840316ms] Jun 10 22:07:29.426: INFO: Created: latency-svc-rhrmv Jun 10 22:07:29.428: INFO: Got endpoints: latency-svc-rhrmv [41.829476ms] Jun 10 22:07:29.429: INFO: Created: latency-svc-6nk7r Jun 10 22:07:29.431: INFO: Got endpoints: latency-svc-6nk7r [42.139294ms] Jun 10 22:07:29.432: INFO: Created: latency-svc-nx4vj Jun 10 22:07:29.434: INFO: Got endpoints: latency-svc-nx4vj [42.736463ms] Jun 10 22:07:29.434: INFO: Created: latency-svc-5tcmg Jun 10 22:07:29.438: INFO: Created: latency-svc-55r7p Jun 10 22:07:29.439: INFO: Got endpoints: latency-svc-5tcmg [45.00124ms] Jun 10 22:07:29.440: INFO: Created: latency-svc-rq7dr Jun 10 22:07:29.443: INFO: Created: latency-svc-jgsg2 Jun 10 22:07:29.446: INFO: Created: latency-svc-vdzn8 Jun 10 22:07:29.448: INFO: Created: latency-svc-xlmhn Jun 10 22:07:29.450: INFO: Created: latency-svc-hqdhg Jun 10 22:07:29.453: INFO: Created: latency-svc-52zmd Jun 10 22:07:29.456: INFO: Created: latency-svc-7bv6s Jun 10 22:07:29.459: INFO: Created: latency-svc-shs8v Jun 10 22:07:29.462: INFO: Created: latency-svc-gkw44 Jun 10 22:07:29.464: INFO: Created: latency-svc-zczsf Jun 10 22:07:29.466: INFO: Created: latency-svc-vmr4c Jun 
10 22:07:29.469: INFO: Created: latency-svc-9q9cv Jun 10 22:07:29.472: INFO: Created: latency-svc-r4npc Jun 10 22:07:29.474: INFO: Created: latency-svc-kf5ht Jun 10 22:07:29.490: INFO: Got endpoints: latency-svc-55r7p [92.53406ms] Jun 10 22:07:29.495: INFO: Created: latency-svc-cgjqr Jun 10 22:07:29.540: INFO: Got endpoints: latency-svc-rq7dr [139.749291ms] Jun 10 22:07:29.546: INFO: Created: latency-svc-frhzs Jun 10 22:07:29.589: INFO: Got endpoints: latency-svc-jgsg2 [186.305317ms] Jun 10 22:07:29.594: INFO: Created: latency-svc-b56qh Jun 10 22:07:29.639: INFO: Got endpoints: latency-svc-vdzn8 [232.766935ms] Jun 10 22:07:29.645: INFO: Created: latency-svc-9jm2s Jun 10 22:07:29.689: INFO: Got endpoints: latency-svc-xlmhn [280.195335ms] Jun 10 22:07:29.694: INFO: Created: latency-svc-2jqrh Jun 10 22:07:29.740: INFO: Got endpoints: latency-svc-hqdhg [328.640017ms] Jun 10 22:07:29.745: INFO: Created: latency-svc-pdfhr Jun 10 22:07:29.790: INFO: Got endpoints: latency-svc-52zmd [375.967713ms] Jun 10 22:07:29.796: INFO: Created: latency-svc-lqlfq Jun 10 22:07:29.841: INFO: Got endpoints: latency-svc-7bv6s [424.20594ms] Jun 10 22:07:29.846: INFO: Created: latency-svc-fqgzz Jun 10 22:07:29.890: INFO: Got endpoints: latency-svc-shs8v [469.841813ms] Jun 10 22:07:29.895: INFO: Created: latency-svc-5sjnn Jun 10 22:07:29.940: INFO: Got endpoints: latency-svc-gkw44 [516.985141ms] Jun 10 22:07:29.946: INFO: Created: latency-svc-zxc7s Jun 10 22:07:29.989: INFO: Got endpoints: latency-svc-zczsf [564.402703ms] Jun 10 22:07:29.996: INFO: Created: latency-svc-z4bwt Jun 10 22:07:30.040: INFO: Got endpoints: latency-svc-vmr4c [611.56461ms] Jun 10 22:07:30.045: INFO: Created: latency-svc-8fgc8 Jun 10 22:07:30.090: INFO: Got endpoints: latency-svc-9q9cv [658.60442ms] Jun 10 22:07:30.095: INFO: Created: latency-svc-4j2rz Jun 10 22:07:30.139: INFO: Got endpoints: latency-svc-r4npc [704.917726ms] Jun 10 22:07:30.146: INFO: Created: latency-svc-xrnm6 Jun 10 22:07:30.190: INFO: Got endpoints: latency-svc-kf5ht [750.578698ms] Jun 10 22:07:30.196: INFO: Created: latency-svc-jczqb Jun 10 22:07:30.239: INFO: Got endpoints: latency-svc-cgjqr [749.342226ms] Jun 10 22:07:30.244: INFO: Created: latency-svc-whcpf Jun 10 22:07:30.289: INFO: Got endpoints: latency-svc-frhzs [749.124671ms] Jun 10 22:07:30.295: INFO: Created: latency-svc-gcx95 Jun 10 22:07:30.339: INFO: Got endpoints: latency-svc-b56qh [749.999641ms] Jun 10 22:07:30.344: INFO: Created: latency-svc-t7fp4 Jun 10 22:07:30.390: INFO: Got endpoints: latency-svc-9jm2s [750.8692ms] Jun 10 22:07:30.396: INFO: Created: latency-svc-ksgp2 Jun 10 22:07:30.439: INFO: Got endpoints: latency-svc-2jqrh [749.751356ms] Jun 10 22:07:30.446: INFO: Created: latency-svc-88l7s Jun 10 22:07:30.490: INFO: Got endpoints: latency-svc-pdfhr [750.077798ms] Jun 10 22:07:30.496: INFO: Created: latency-svc-d842h Jun 10 22:07:30.540: INFO: Got endpoints: latency-svc-lqlfq [750.741181ms] Jun 10 22:07:30.548: INFO: Created: latency-svc-ng2qp Jun 10 22:07:30.589: INFO: Got endpoints: latency-svc-fqgzz [748.032649ms] Jun 10 22:07:30.594: INFO: Created: latency-svc-vfrk9 Jun 10 22:07:30.640: INFO: Got endpoints: latency-svc-5sjnn [750.217567ms] Jun 10 22:07:30.646: INFO: Created: latency-svc-tcpfs Jun 10 22:07:30.690: INFO: Got endpoints: latency-svc-zxc7s [749.924368ms] Jun 10 22:07:30.695: INFO: Created: latency-svc-cr6hq Jun 10 22:07:30.739: INFO: Got endpoints: latency-svc-z4bwt [749.609064ms] Jun 10 22:07:30.745: INFO: Created: latency-svc-zdrw8 Jun 10 22:07:30.789: INFO: Got endpoints: 
latency-svc-8fgc8 [749.212594ms] Jun 10 22:07:30.794: INFO: Created: latency-svc-hnm26 Jun 10 22:07:30.839: INFO: Got endpoints: latency-svc-4j2rz [749.04388ms] Jun 10 22:07:30.845: INFO: Created: latency-svc-ss766 Jun 10 22:07:30.889: INFO: Got endpoints: latency-svc-xrnm6 [749.625895ms] Jun 10 22:07:30.894: INFO: Created: latency-svc-8fm8f Jun 10 22:07:30.940: INFO: Got endpoints: latency-svc-jczqb [749.566297ms] Jun 10 22:07:30.945: INFO: Created: latency-svc-gnljj Jun 10 22:07:30.990: INFO: Got endpoints: latency-svc-whcpf [750.5383ms] Jun 10 22:07:30.995: INFO: Created: latency-svc-p8c45 Jun 10 22:07:31.039: INFO: Got endpoints: latency-svc-gcx95 [750.171614ms] Jun 10 22:07:31.045: INFO: Created: latency-svc-pt9bn Jun 10 22:07:31.088: INFO: Got endpoints: latency-svc-t7fp4 [749.217873ms] Jun 10 22:07:31.093: INFO: Created: latency-svc-c78j5 Jun 10 22:07:31.140: INFO: Got endpoints: latency-svc-ksgp2 [749.395295ms] Jun 10 22:07:31.145: INFO: Created: latency-svc-5gmfk Jun 10 22:07:31.188: INFO: Got endpoints: latency-svc-88l7s [749.41541ms] Jun 10 22:07:31.207: INFO: Created: latency-svc-cg2sl Jun 10 22:07:31.239: INFO: Got endpoints: latency-svc-d842h [748.744482ms] Jun 10 22:07:31.246: INFO: Created: latency-svc-qsz8q Jun 10 22:07:31.289: INFO: Got endpoints: latency-svc-ng2qp [748.082107ms] Jun 10 22:07:31.295: INFO: Created: latency-svc-t8t87 Jun 10 22:07:31.339: INFO: Got endpoints: latency-svc-vfrk9 [749.665819ms] Jun 10 22:07:31.344: INFO: Created: latency-svc-fh5p6 Jun 10 22:07:31.389: INFO: Got endpoints: latency-svc-tcpfs [749.168261ms] Jun 10 22:07:31.394: INFO: Created: latency-svc-9gjst Jun 10 22:07:31.439: INFO: Got endpoints: latency-svc-cr6hq [749.284181ms] Jun 10 22:07:31.445: INFO: Created: latency-svc-7zgjg Jun 10 22:07:31.489: INFO: Got endpoints: latency-svc-zdrw8 [749.997172ms] Jun 10 22:07:31.494: INFO: Created: latency-svc-dscnm Jun 10 22:07:31.539: INFO: Got endpoints: latency-svc-hnm26 [750.269101ms] Jun 10 22:07:31.544: INFO: Created: latency-svc-28xlp Jun 10 22:07:31.591: INFO: Got endpoints: latency-svc-ss766 [751.95821ms] Jun 10 22:07:31.597: INFO: Created: latency-svc-9v84v Jun 10 22:07:31.640: INFO: Got endpoints: latency-svc-8fm8f [751.085192ms] Jun 10 22:07:31.646: INFO: Created: latency-svc-dpk82 Jun 10 22:07:31.688: INFO: Got endpoints: latency-svc-gnljj [748.623604ms] Jun 10 22:07:31.693: INFO: Created: latency-svc-t5sb2 Jun 10 22:07:31.740: INFO: Got endpoints: latency-svc-p8c45 [750.101694ms] Jun 10 22:07:31.746: INFO: Created: latency-svc-2xf92 Jun 10 22:07:31.789: INFO: Got endpoints: latency-svc-pt9bn [750.162588ms] Jun 10 22:07:31.796: INFO: Created: latency-svc-c49j9 Jun 10 22:07:31.839: INFO: Got endpoints: latency-svc-c78j5 [750.454193ms] Jun 10 22:07:31.844: INFO: Created: latency-svc-zt6r9 Jun 10 22:07:31.889: INFO: Got endpoints: latency-svc-5gmfk [749.670345ms] Jun 10 22:07:31.895: INFO: Created: latency-svc-qfd7f Jun 10 22:07:31.989: INFO: Got endpoints: latency-svc-cg2sl [800.66621ms] Jun 10 22:07:31.995: INFO: Created: latency-svc-h54kn Jun 10 22:07:32.039: INFO: Got endpoints: latency-svc-qsz8q [800.293044ms] Jun 10 22:07:32.044: INFO: Created: latency-svc-rcz7j Jun 10 22:07:32.089: INFO: Got endpoints: latency-svc-t8t87 [800.454362ms] Jun 10 22:07:32.097: INFO: Created: latency-svc-b8cfc Jun 10 22:07:32.138: INFO: Got endpoints: latency-svc-fh5p6 [799.501942ms] Jun 10 22:07:32.144: INFO: Created: latency-svc-fwps8 Jun 10 22:07:32.189: INFO: Got endpoints: latency-svc-9gjst [799.516529ms] Jun 10 22:07:32.194: INFO: Created: 
latency-svc-7862h Jun 10 22:07:32.239: INFO: Got endpoints: latency-svc-7zgjg [799.276238ms] Jun 10 22:07:32.244: INFO: Created: latency-svc-b7ss6 Jun 10 22:07:32.290: INFO: Got endpoints: latency-svc-dscnm [800.522289ms] Jun 10 22:07:32.295: INFO: Created: latency-svc-j5pv4 Jun 10 22:07:32.339: INFO: Got endpoints: latency-svc-28xlp [799.875049ms] Jun 10 22:07:32.344: INFO: Created: latency-svc-t2rc8 Jun 10 22:07:32.389: INFO: Got endpoints: latency-svc-9v84v [798.017031ms] Jun 10 22:07:32.395: INFO: Created: latency-svc-xr9sj Jun 10 22:07:32.439: INFO: Got endpoints: latency-svc-dpk82 [799.142004ms] Jun 10 22:07:32.444: INFO: Created: latency-svc-97zkt Jun 10 22:07:32.488: INFO: Got endpoints: latency-svc-t5sb2 [799.946267ms] Jun 10 22:07:32.495: INFO: Created: latency-svc-ndcmx Jun 10 22:07:32.539: INFO: Got endpoints: latency-svc-2xf92 [798.860816ms] Jun 10 22:07:32.545: INFO: Created: latency-svc-kcjq8 Jun 10 22:07:32.588: INFO: Got endpoints: latency-svc-c49j9 [798.649709ms] Jun 10 22:07:32.594: INFO: Created: latency-svc-jgtl8 Jun 10 22:07:32.639: INFO: Got endpoints: latency-svc-zt6r9 [800.235799ms] Jun 10 22:07:32.645: INFO: Created: latency-svc-fszzk Jun 10 22:07:32.690: INFO: Got endpoints: latency-svc-qfd7f [800.764677ms] Jun 10 22:07:32.696: INFO: Created: latency-svc-pd8t2 Jun 10 22:07:32.738: INFO: Got endpoints: latency-svc-h54kn [749.217031ms] Jun 10 22:07:32.744: INFO: Created: latency-svc-v2q9v Jun 10 22:07:32.789: INFO: Got endpoints: latency-svc-rcz7j [749.757097ms] Jun 10 22:07:32.794: INFO: Created: latency-svc-pt9wv Jun 10 22:07:32.839: INFO: Got endpoints: latency-svc-b8cfc [749.492048ms] Jun 10 22:07:32.845: INFO: Created: latency-svc-pxqpk Jun 10 22:07:32.889: INFO: Got endpoints: latency-svc-fwps8 [750.263667ms] Jun 10 22:07:32.894: INFO: Created: latency-svc-r2tnn Jun 10 22:07:32.939: INFO: Got endpoints: latency-svc-7862h [750.384092ms] Jun 10 22:07:32.944: INFO: Created: latency-svc-t5qtj Jun 10 22:07:32.994: INFO: Got endpoints: latency-svc-b7ss6 [755.164624ms] Jun 10 22:07:33.000: INFO: Created: latency-svc-8chb9 Jun 10 22:07:33.040: INFO: Got endpoints: latency-svc-j5pv4 [750.10394ms] Jun 10 22:07:33.045: INFO: Created: latency-svc-2rdcb Jun 10 22:07:33.090: INFO: Got endpoints: latency-svc-t2rc8 [751.258103ms] Jun 10 22:07:33.096: INFO: Created: latency-svc-m4w8z Jun 10 22:07:33.139: INFO: Got endpoints: latency-svc-xr9sj [749.791363ms] Jun 10 22:07:33.144: INFO: Created: latency-svc-7ftdk Jun 10 22:07:33.189: INFO: Got endpoints: latency-svc-97zkt [749.664202ms] Jun 10 22:07:33.194: INFO: Created: latency-svc-sn4pn Jun 10 22:07:33.239: INFO: Got endpoints: latency-svc-ndcmx [750.131116ms] Jun 10 22:07:33.244: INFO: Created: latency-svc-szhmm Jun 10 22:07:33.290: INFO: Got endpoints: latency-svc-kcjq8 [750.949946ms] Jun 10 22:07:33.295: INFO: Created: latency-svc-hx4zf Jun 10 22:07:33.340: INFO: Got endpoints: latency-svc-jgtl8 [751.7114ms] Jun 10 22:07:33.346: INFO: Created: latency-svc-pjmb9 Jun 10 22:07:33.389: INFO: Got endpoints: latency-svc-fszzk [749.287331ms] Jun 10 22:07:33.394: INFO: Created: latency-svc-wgwsj Jun 10 22:07:33.439: INFO: Got endpoints: latency-svc-pd8t2 [748.979484ms] Jun 10 22:07:33.445: INFO: Created: latency-svc-ckvns Jun 10 22:07:33.489: INFO: Got endpoints: latency-svc-v2q9v [750.668434ms] Jun 10 22:07:33.496: INFO: Created: latency-svc-tckr6 Jun 10 22:07:33.540: INFO: Got endpoints: latency-svc-pt9wv [750.76074ms] Jun 10 22:07:33.546: INFO: Created: latency-svc-vhlzv Jun 10 22:07:33.590: INFO: Got endpoints: 
latency-svc-pxqpk [751.160807ms] Jun 10 22:07:33.595: INFO: Created: latency-svc-7c8cj Jun 10 22:07:33.639: INFO: Got endpoints: latency-svc-r2tnn [749.935116ms] Jun 10 22:07:33.645: INFO: Created: latency-svc-tsg2n Jun 10 22:07:33.690: INFO: Got endpoints: latency-svc-t5qtj [750.955048ms] Jun 10 22:07:33.695: INFO: Created: latency-svc-6zwb9 Jun 10 22:07:33.739: INFO: Got endpoints: latency-svc-8chb9 [745.042215ms] Jun 10 22:07:33.744: INFO: Created: latency-svc-fdc8z Jun 10 22:07:33.788: INFO: Got endpoints: latency-svc-2rdcb [748.496339ms] Jun 10 22:07:33.794: INFO: Created: latency-svc-fl7ln Jun 10 22:07:33.839: INFO: Got endpoints: latency-svc-m4w8z [748.815983ms] Jun 10 22:07:33.845: INFO: Created: latency-svc-48rkz Jun 10 22:07:33.889: INFO: Got endpoints: latency-svc-7ftdk [750.303791ms] Jun 10 22:07:33.895: INFO: Created: latency-svc-cvnmd Jun 10 22:07:33.939: INFO: Got endpoints: latency-svc-sn4pn [750.736197ms] Jun 10 22:07:33.945: INFO: Created: latency-svc-lff4t Jun 10 22:07:33.990: INFO: Got endpoints: latency-svc-szhmm [751.153464ms] Jun 10 22:07:33.995: INFO: Created: latency-svc-rwcjf Jun 10 22:07:34.039: INFO: Got endpoints: latency-svc-hx4zf [749.678569ms] Jun 10 22:07:34.045: INFO: Created: latency-svc-lvj6q Jun 10 22:07:34.090: INFO: Got endpoints: latency-svc-pjmb9 [749.96738ms] Jun 10 22:07:34.097: INFO: Created: latency-svc-bk9cb Jun 10 22:07:34.139: INFO: Got endpoints: latency-svc-wgwsj [750.466793ms] Jun 10 22:07:34.145: INFO: Created: latency-svc-l8xbs Jun 10 22:07:34.189: INFO: Got endpoints: latency-svc-ckvns [749.596995ms] Jun 10 22:07:34.195: INFO: Created: latency-svc-fb6dd Jun 10 22:07:34.239: INFO: Got endpoints: latency-svc-tckr6 [749.80269ms] Jun 10 22:07:34.245: INFO: Created: latency-svc-dgt4n Jun 10 22:07:34.289: INFO: Got endpoints: latency-svc-vhlzv [749.166189ms] Jun 10 22:07:34.294: INFO: Created: latency-svc-7zg76 Jun 10 22:07:34.339: INFO: Got endpoints: latency-svc-7c8cj [749.467292ms] Jun 10 22:07:34.345: INFO: Created: latency-svc-tzrdn Jun 10 22:07:34.390: INFO: Got endpoints: latency-svc-tsg2n [750.906464ms] Jun 10 22:07:34.395: INFO: Created: latency-svc-zqkvm Jun 10 22:07:34.438: INFO: Got endpoints: latency-svc-6zwb9 [748.340643ms] Jun 10 22:07:34.445: INFO: Created: latency-svc-fsxqn Jun 10 22:07:34.590: INFO: Got endpoints: latency-svc-fdc8z [850.644724ms] Jun 10 22:07:34.595: INFO: Created: latency-svc-nvfg2 Jun 10 22:07:34.640: INFO: Got endpoints: latency-svc-fl7ln [851.668716ms] Jun 10 22:07:34.645: INFO: Created: latency-svc-tnphv Jun 10 22:07:34.689: INFO: Got endpoints: latency-svc-48rkz [849.350127ms] Jun 10 22:07:34.695: INFO: Created: latency-svc-22k7x Jun 10 22:07:34.740: INFO: Got endpoints: latency-svc-cvnmd [851.024392ms] Jun 10 22:07:34.746: INFO: Created: latency-svc-vxqpv Jun 10 22:07:34.790: INFO: Got endpoints: latency-svc-lff4t [850.911118ms] Jun 10 22:07:34.798: INFO: Created: latency-svc-5kscg Jun 10 22:07:34.839: INFO: Got endpoints: latency-svc-rwcjf [848.880817ms] Jun 10 22:07:34.844: INFO: Created: latency-svc-v27qc Jun 10 22:07:34.890: INFO: Got endpoints: latency-svc-lvj6q [850.159923ms] Jun 10 22:07:34.895: INFO: Created: latency-svc-rfdn9 Jun 10 22:07:34.939: INFO: Got endpoints: latency-svc-bk9cb [849.30029ms] Jun 10 22:07:34.945: INFO: Created: latency-svc-cmxtd Jun 10 22:07:34.989: INFO: Got endpoints: latency-svc-l8xbs [849.850342ms] Jun 10 22:07:34.995: INFO: Created: latency-svc-qrmpt Jun 10 22:07:35.069: INFO: Got endpoints: latency-svc-fb6dd [880.053816ms] Jun 10 22:07:35.075: INFO: Created: 
latency-svc-m8c2t Jun 10 22:07:35.089: INFO: Got endpoints: latency-svc-dgt4n [849.658347ms] Jun 10 22:07:35.094: INFO: Created: latency-svc-bpksf Jun 10 22:07:35.139: INFO: Got endpoints: latency-svc-7zg76 [850.108097ms] Jun 10 22:07:35.145: INFO: Created: latency-svc-2fws5 Jun 10 22:07:35.190: INFO: Got endpoints: latency-svc-tzrdn [850.202263ms] Jun 10 22:07:35.195: INFO: Created: latency-svc-4n86v Jun 10 22:07:35.240: INFO: Got endpoints: latency-svc-zqkvm [849.877762ms] Jun 10 22:07:35.246: INFO: Created: latency-svc-f7hnn Jun 10 22:07:35.289: INFO: Got endpoints: latency-svc-fsxqn [850.242944ms] Jun 10 22:07:35.295: INFO: Created: latency-svc-hkk8r Jun 10 22:07:35.339: INFO: Got endpoints: latency-svc-nvfg2 [749.509114ms] Jun 10 22:07:35.345: INFO: Created: latency-svc-jktdq Jun 10 22:07:35.389: INFO: Got endpoints: latency-svc-tnphv [749.18949ms] Jun 10 22:07:35.394: INFO: Created: latency-svc-wvnhd Jun 10 22:07:35.439: INFO: Got endpoints: latency-svc-22k7x [749.866545ms] Jun 10 22:07:35.444: INFO: Created: latency-svc-qrrqh Jun 10 22:07:35.490: INFO: Got endpoints: latency-svc-vxqpv [750.198893ms] Jun 10 22:07:35.496: INFO: Created: latency-svc-4l47z Jun 10 22:07:35.539: INFO: Got endpoints: latency-svc-5kscg [748.866376ms] Jun 10 22:07:35.545: INFO: Created: latency-svc-5rxss Jun 10 22:07:35.590: INFO: Got endpoints: latency-svc-v27qc [751.059552ms] Jun 10 22:07:35.595: INFO: Created: latency-svc-gcwnh Jun 10 22:07:35.640: INFO: Got endpoints: latency-svc-rfdn9 [750.196155ms] Jun 10 22:07:35.646: INFO: Created: latency-svc-w4ljh Jun 10 22:07:35.689: INFO: Got endpoints: latency-svc-cmxtd [749.791948ms] Jun 10 22:07:35.695: INFO: Created: latency-svc-9wtjc Jun 10 22:07:35.739: INFO: Got endpoints: latency-svc-qrmpt [749.698263ms] Jun 10 22:07:35.745: INFO: Created: latency-svc-z8998 Jun 10 22:07:35.789: INFO: Got endpoints: latency-svc-m8c2t [719.684641ms] Jun 10 22:07:35.794: INFO: Created: latency-svc-tj9gk Jun 10 22:07:35.839: INFO: Got endpoints: latency-svc-bpksf [749.876251ms] Jun 10 22:07:35.845: INFO: Created: latency-svc-4pdfx Jun 10 22:07:35.889: INFO: Got endpoints: latency-svc-2fws5 [750.105504ms] Jun 10 22:07:35.896: INFO: Created: latency-svc-5fqsx Jun 10 22:07:35.938: INFO: Got endpoints: latency-svc-4n86v [748.672855ms] Jun 10 22:07:35.945: INFO: Created: latency-svc-8qwzt Jun 10 22:07:35.989: INFO: Got endpoints: latency-svc-f7hnn [749.448203ms] Jun 10 22:07:35.994: INFO: Created: latency-svc-bmjh7 Jun 10 22:07:36.039: INFO: Got endpoints: latency-svc-hkk8r [750.121781ms] Jun 10 22:07:36.045: INFO: Created: latency-svc-pkk95 Jun 10 22:07:36.088: INFO: Got endpoints: latency-svc-jktdq [749.204462ms] Jun 10 22:07:36.094: INFO: Created: latency-svc-q28xz Jun 10 22:07:36.139: INFO: Got endpoints: latency-svc-wvnhd [749.202243ms] Jun 10 22:07:36.144: INFO: Created: latency-svc-vc28w Jun 10 22:07:36.189: INFO: Got endpoints: latency-svc-qrrqh [750.023198ms] Jun 10 22:07:36.194: INFO: Created: latency-svc-k5597 Jun 10 22:07:36.239: INFO: Got endpoints: latency-svc-4l47z [748.820305ms] Jun 10 22:07:36.245: INFO: Created: latency-svc-gf2fp Jun 10 22:07:36.339: INFO: Got endpoints: latency-svc-5rxss [799.175909ms] Jun 10 22:07:36.345: INFO: Created: latency-svc-s9mm5 Jun 10 22:07:36.439: INFO: Got endpoints: latency-svc-gcwnh [848.821071ms] Jun 10 22:07:36.445: INFO: Created: latency-svc-s45s2 Jun 10 22:07:36.488: INFO: Got endpoints: latency-svc-w4ljh [848.595515ms] Jun 10 22:07:36.494: INFO: Created: latency-svc-q45nf Jun 10 22:07:36.539: INFO: Got endpoints: 
latency-svc-9wtjc [849.63049ms] Jun 10 22:07:36.546: INFO: Created: latency-svc-4tr7c Jun 10 22:07:36.588: INFO: Got endpoints: latency-svc-z8998 [849.411466ms] Jun 10 22:07:36.594: INFO: Created: latency-svc-pptmz Jun 10 22:07:36.639: INFO: Got endpoints: latency-svc-tj9gk [850.013156ms] Jun 10 22:07:36.645: INFO: Created: latency-svc-n8rws Jun 10 22:07:36.694: INFO: Got endpoints: latency-svc-4pdfx [855.426695ms] Jun 10 22:07:36.706: INFO: Created: latency-svc-xqdjc Jun 10 22:07:36.739: INFO: Got endpoints: latency-svc-5fqsx [849.81199ms] Jun 10 22:07:36.745: INFO: Created: latency-svc-8f82h Jun 10 22:07:36.789: INFO: Got endpoints: latency-svc-8qwzt [850.446052ms] Jun 10 22:07:36.795: INFO: Created: latency-svc-jwcf7 Jun 10 22:07:36.839: INFO: Got endpoints: latency-svc-bmjh7 [849.885466ms] Jun 10 22:07:36.844: INFO: Created: latency-svc-5hkz2 Jun 10 22:07:36.889: INFO: Got endpoints: latency-svc-pkk95 [850.300951ms] Jun 10 22:07:36.895: INFO: Created: latency-svc-5hz7m Jun 10 22:07:36.939: INFO: Got endpoints: latency-svc-q28xz [850.287111ms] Jun 10 22:07:36.944: INFO: Created: latency-svc-2mfph Jun 10 22:07:36.989: INFO: Got endpoints: latency-svc-vc28w [849.879177ms] Jun 10 22:07:36.993: INFO: Created: latency-svc-x6zmz Jun 10 22:07:37.039: INFO: Got endpoints: latency-svc-k5597 [849.95044ms] Jun 10 22:07:37.044: INFO: Created: latency-svc-twlcv Jun 10 22:07:37.091: INFO: Got endpoints: latency-svc-gf2fp [851.812164ms] Jun 10 22:07:37.097: INFO: Created: latency-svc-pz76b Jun 10 22:07:37.139: INFO: Got endpoints: latency-svc-s9mm5 [800.102384ms] Jun 10 22:07:37.145: INFO: Created: latency-svc-8wz8z Jun 10 22:07:37.189: INFO: Got endpoints: latency-svc-s45s2 [749.761671ms] Jun 10 22:07:37.194: INFO: Created: latency-svc-clwjx Jun 10 22:07:37.239: INFO: Got endpoints: latency-svc-q45nf [750.654031ms] Jun 10 22:07:37.245: INFO: Created: latency-svc-ktzgt Jun 10 22:07:37.290: INFO: Got endpoints: latency-svc-4tr7c [750.968974ms] Jun 10 22:07:37.296: INFO: Created: latency-svc-84q8j Jun 10 22:07:37.340: INFO: Got endpoints: latency-svc-pptmz [751.561938ms] Jun 10 22:07:37.346: INFO: Created: latency-svc-94jlh Jun 10 22:07:37.390: INFO: Got endpoints: latency-svc-n8rws [751.123739ms] Jun 10 22:07:37.396: INFO: Created: latency-svc-xk7hb Jun 10 22:07:37.489: INFO: Got endpoints: latency-svc-xqdjc [795.210173ms] Jun 10 22:07:37.539: INFO: Got endpoints: latency-svc-8f82h [800.173593ms] Jun 10 22:07:37.590: INFO: Got endpoints: latency-svc-jwcf7 [800.818412ms] Jun 10 22:07:37.639: INFO: Got endpoints: latency-svc-5hkz2 [799.611775ms] Jun 10 22:07:37.697: INFO: Got endpoints: latency-svc-5hz7m [808.092902ms] Jun 10 22:07:37.740: INFO: Got endpoints: latency-svc-2mfph [801.342147ms] Jun 10 22:07:37.789: INFO: Got endpoints: latency-svc-x6zmz [800.273943ms] Jun 10 22:07:37.840: INFO: Got endpoints: latency-svc-twlcv [801.127713ms] Jun 10 22:07:37.890: INFO: Got endpoints: latency-svc-pz76b [799.096098ms] Jun 10 22:07:37.939: INFO: Got endpoints: latency-svc-8wz8z [800.539304ms] Jun 10 22:07:37.989: INFO: Got endpoints: latency-svc-clwjx [800.334563ms] Jun 10 22:07:38.040: INFO: Got endpoints: latency-svc-ktzgt [800.359227ms] Jun 10 22:07:38.089: INFO: Got endpoints: latency-svc-84q8j [799.619265ms] Jun 10 22:07:38.139: INFO: Got endpoints: latency-svc-94jlh [799.055618ms] Jun 10 22:07:38.189: INFO: Got endpoints: latency-svc-xk7hb [799.28787ms] Jun 10 22:07:38.190: INFO: Latencies: [7.087392ms 8.145228ms 13.01472ms 15.25136ms 17.417663ms 20.504593ms 25.707779ms 28.802179ms 31.463701ms 
35.103336ms 37.642ms 40.488495ms 41.829476ms 41.840316ms 42.139294ms 42.511921ms 42.736463ms 42.808587ms 42.981907ms 43.177853ms 43.223566ms 43.479391ms 43.532779ms 43.705202ms 44.108507ms 45.00124ms 45.357186ms 46.047125ms 48.308981ms 48.82134ms 52.033328ms 92.53406ms 139.749291ms 186.305317ms 232.766935ms 280.195335ms 328.640017ms 375.967713ms 424.20594ms 469.841813ms 516.985141ms 564.402703ms 611.56461ms 658.60442ms 704.917726ms 719.684641ms 745.042215ms 748.032649ms 748.082107ms 748.340643ms 748.496339ms 748.623604ms 748.672855ms 748.744482ms 748.815983ms 748.820305ms 748.866376ms 748.979484ms 749.04388ms 749.124671ms 749.166189ms 749.168261ms 749.18949ms 749.202243ms 749.204462ms 749.212594ms 749.217031ms 749.217873ms 749.284181ms 749.287331ms 749.342226ms 749.395295ms 749.41541ms 749.448203ms 749.467292ms 749.492048ms 749.509114ms 749.566297ms 749.596995ms 749.609064ms 749.625895ms 749.664202ms 749.665819ms 749.670345ms 749.678569ms 749.698263ms 749.751356ms 749.757097ms 749.761671ms 749.791363ms 749.791948ms 749.80269ms 749.866545ms 749.876251ms 749.924368ms 749.935116ms 749.96738ms 749.997172ms 749.999641ms 750.023198ms 750.077798ms 750.101694ms 750.10394ms 750.105504ms 750.121781ms 750.131116ms 750.162588ms 750.171614ms 750.196155ms 750.198893ms 750.217567ms 750.263667ms 750.269101ms 750.303791ms 750.384092ms 750.454193ms 750.466793ms 750.5383ms 750.578698ms 750.654031ms 750.668434ms 750.736197ms 750.741181ms 750.76074ms 750.8692ms 750.906464ms 750.949946ms 750.955048ms 750.968974ms 751.059552ms 751.085192ms 751.123739ms 751.153464ms 751.160807ms 751.258103ms 751.561938ms 751.7114ms 751.95821ms 755.164624ms 795.210173ms 798.017031ms 798.649709ms 798.860816ms 799.055618ms 799.096098ms 799.142004ms 799.175909ms 799.276238ms 799.28787ms 799.501942ms 799.516529ms 799.611775ms 799.619265ms 799.875049ms 799.946267ms 800.102384ms 800.173593ms 800.235799ms 800.273943ms 800.293044ms 800.334563ms 800.359227ms 800.454362ms 800.522289ms 800.539304ms 800.66621ms 800.764677ms 800.818412ms 801.127713ms 801.342147ms 808.092902ms 848.595515ms 848.821071ms 848.880817ms 849.30029ms 849.350127ms 849.411466ms 849.63049ms 849.658347ms 849.81199ms 849.850342ms 849.877762ms 849.879177ms 849.885466ms 849.95044ms 850.013156ms 850.108097ms 850.159923ms 850.202263ms 850.242944ms 850.287111ms 850.300951ms 850.446052ms 850.644724ms 850.911118ms 851.024392ms 851.668716ms 851.812164ms 855.426695ms 880.053816ms] Jun 10 22:07:38.190: INFO: 50 %ile: 750.077798ms Jun 10 22:07:38.190: INFO: 90 %ile: 849.850342ms Jun 10 22:07:38.190: INFO: 99 %ile: 855.426695ms Jun 10 22:07:38.190: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:38.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4850" for this suite. 
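------------------------------ The "50 %ile / 90 %ile / 99 %ile" lines above summarize the 200 samples listed under "Latencies:", each sample being the time from creating a latency-svc-* service to first observing its endpoints. A minimal stand-alone sketch of the same kind of nearest-rank percentile summary (plain Go stdlib; the helper name and the hard-coded samples are illustrative, not the e2e framework's actual code):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0-100) of the samples using
// nearest-rank on a sorted copy, mirroring the "%ile" summary lines.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted))*p/100.0+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Illustrative samples only; the real run collected 200 of them.
	samples := []time.Duration{
		7 * time.Millisecond, 750 * time.Millisecond, 800 * time.Millisecond,
		850 * time.Millisecond, 880 * time.Millisecond,
	}
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}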
• [SLOW TEST:12.074 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":25,"skipped":491,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:27.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8096.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8096.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.30.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.30.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.30.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.30.175_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8096.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8096.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.30.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.30.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.30.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.30.175_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 10 22:07:35.517: INFO: Unable to read wheezy_udp@dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.520: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.523: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.525: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.545: INFO: Unable to read jessie_udp@dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.547: INFO: Unable to read jessie_tcp@dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.549: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.552: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local from pod dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570: the server could not find the requested resource (get pods dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570) Jun 10 22:07:35.570: INFO: Lookups using dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570 failed for: [wheezy_udp@dns-test-service.dns-8096.svc.cluster.local wheezy_tcp@dns-test-service.dns-8096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local jessie_udp@dns-test-service.dns-8096.svc.cluster.local jessie_tcp@dns-test-service.dns-8096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8096.svc.cluster.local] Jun 10 22:07:40.628: INFO: DNS probes using dns-8096/dns-test-68a24f55-399d-4e92-aed9-dd0abbd82570 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:07:40.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8096" for this suite. 
• [SLOW TEST:13.201 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":41,"skipped":653,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:36.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-2c30386b-f5e9-41b9-a00d-e47d1130963c
STEP: Creating a pod to test consume secrets
Jun 10 22:07:36.803: INFO: Waiting up to 5m0s for pod "pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e" in namespace "secrets-2429" to be "Succeeded or Failed"
Jun 10 22:07:36.805: INFO: Pod "pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.84202ms
Jun 10 22:07:38.809: INFO: Pod "pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006593215s
Jun 10 22:07:40.813: INFO: Pod "pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010394305s
STEP: Saw pod success
Jun 10 22:07:40.813: INFO: Pod "pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e" satisfied condition "Succeeded or Failed"
Jun 10 22:07:40.816: INFO: Trying to get logs from node node1 pod pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e container secret-volume-test:
STEP: delete the pod
Jun 10 22:07:40.828: INFO: Waiting for pod pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e to disappear
Jun 10 22:07:40.830: INFO: Pod pod-secrets-942b2b49-0e41-49ff-b12a-0e5a9e4b3d3e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:40.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2429" for this suite.
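(The "volume with mappings" case consumes a Secret through a volume whose items remap keys onto new file paths. A minimal sketch of an equivalent setup — the secret name, key, and image here are illustrative, not the test's generated ones:)

    kubectl create secret generic secret-test-map --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # key data-1 is surfaced under this remapped file name
    EOF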
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":841,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:40.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:07:40.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9169 version'
Jun 10 22:07:40.872: INFO: stderr: ""
Jun 10 22:07:40.872: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:40.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9169" for this suite.
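(This spec only shells out to kubectl version and asserts that both the client and the server blocks are printed in full; the manual equivalent against this cluster would be:)

    kubectl --kubeconfig=/root/.kube/config version
    # Expected: a Client Version line (v1.21.9 here) and a Server Version line (v1.21.1 here).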
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":42,"skipped":698,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:34.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-b831afd6-fa74-4db3-9ad5-1258c3d2775f
STEP: Creating secret with name s-test-opt-upd-d6515173-c2ca-4563-8f85-68bb0d0344f9
STEP: Creating the pod
Jun 10 22:07:34.519: INFO: The status of Pod pod-projected-secrets-e620d93b-b5c7-438f-992a-02c62e949421 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:07:36.523: INFO: The status of Pod pod-projected-secrets-e620d93b-b5c7-438f-992a-02c62e949421 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:07:38.524: INFO: The status of Pod pod-projected-secrets-e620d93b-b5c7-438f-992a-02c62e949421 is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-b831afd6-fa74-4db3-9ad5-1258c3d2775f
STEP: Updating secret s-test-opt-upd-d6515173-c2ca-4563-8f85-68bb0d0344f9
STEP: Creating secret with name s-test-opt-create-283a2f85-7c15-4a52-a89c-05b76ec1d957
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:42.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9880" for this suite.
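(The projected-secret spec mounts several Secrets as sources of one projected volume, marked optional so the pod stays healthy while one source is deleted, one updated, and one created; the kubelet then syncs the mounted files. A rough sketch of such a volume — names and image are illustrative:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
        volumeMounts:
        - name: projected-secrets
          mountPath: /etc/projected
      volumes:
      - name: projected-secrets
        projected:
          sources:
          - secret:
              name: s-test-opt-del     # deleted mid-test; optional keeps the pod running
              optional: true
          - secret:
              name: s-test-opt-create  # created mid-test; its keys then appear in the volume
              optional: true
    EOF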
• [SLOW TEST:8.126 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":248,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:40.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[BeforeEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:40.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption-2
STEP: Waiting for a default service account to be provisioned in namespace
[It] should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: listing a collection of PDBs across all namespaces
STEP: listing a collection of PDBs in namespace disruption-6733
STEP: deleting a collection of PDBs
STEP: Waiting for the PDB collection to be deleted
[AfterEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:44.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-1631" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:44.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6733" for this suite.
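(This spec exercises PodDisruptionBudgets as a collection: create several across namespaces, list them all, then delete them as a group. The kubectl equivalents, with an illustrative PDB:)

    kubectl apply -f - <<'EOF'
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: pdb-demo
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          app: demo
    EOF
    kubectl get poddisruptionbudgets --all-namespaces   # list PDBs across all namespaces
    kubectl delete poddisruptionbudgets --all           # delete the namespace's PDBs as a collection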
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":37,"skipped":857,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:38.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ReplicationController
STEP: waiting for RC to be added
STEP: waiting for available Replicas
STEP: patching ReplicationController
STEP: waiting for RC to be modified
STEP: patching ReplicationController status
STEP: waiting for RC to be modified
STEP: waiting for available Replicas
STEP: fetching ReplicationController status
STEP: patching ReplicationController scale
STEP: waiting for RC to be modified
STEP: waiting for ReplicationController's scale to be the max amount
STEP: fetching ReplicationController; ensuring that it's patched
STEP: updating ReplicationController status
STEP: waiting for RC to be modified
STEP: listing all ReplicationControllers
STEP: checking that ReplicationController has expected values
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:46.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3747" for this suite.
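(The lifecycle spec drives create, patch, scale, list, and delete-collection through the API directly; approximate kubectl equivalents, with illustrative names:)

    kubectl create -f rc.yaml                                             # create the ReplicationController
    kubectl patch rc demo -p '{"metadata":{"labels":{"rc":"patched"}}}'   # patch metadata
    kubectl scale rc demo --replicas=2                                    # the test patches the scale subresource
    kubectl get rc --all-namespaces                                       # list all ReplicationControllers
    kubectl delete rc --all                                               # delete by collection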
• [SLOW TEST:8.331 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":26,"skipped":509,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:46.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Jun 10 22:07:46.671: INFO: Waiting up to 5m0s for pod "var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47" in namespace "var-expansion-6693" to be "Succeeded or Failed"
Jun 10 22:07:46.673: INFO: Pod "var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048631ms
Jun 10 22:07:48.676: INFO: Pod "var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00504079s
Jun 10 22:07:50.680: INFO: Pod "var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009008066s
STEP: Saw pod success
Jun 10 22:07:50.680: INFO: Pod "var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47" satisfied condition "Succeeded or Failed"
Jun 10 22:07:50.683: INFO: Trying to get logs from node node2 pod var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47 container dapi-container:
STEP: delete the pod
Jun 10 22:07:50.755: INFO: Waiting for pod var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47 to disappear
Jun 10 22:07:50.757: INFO: Pod var-expansion-b23d6c2e-2b29-473a-8348-6f8384e42a47 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:50.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6693" for this suite.
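(Env composition works through the $(VAR) syntax in a container's env list: a later variable may reference earlier ones, and the kubelet expands it when the container starts. A minimal sketch — variable names and values are illustrative:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $FOOBAR"]
        env:
        - name: FOO
          value: foo-value
        - name: BAR
          value: bar-value
        - name: FOOBAR
          value: "$(FOO);;$(BAR)"   # composed from the two variables declared above
    EOF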
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":543,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:40.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Jun 10 22:07:42.960: INFO: running pods: 0 < 3
Jun 10 22:07:44.965: INFO: running pods: 0 < 3
Jun 10 22:07:46.964: INFO: running pods: 1 < 3
Jun 10 22:07:48.965: INFO: running pods: 1 < 3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:50.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6446" for this suite.
• [SLOW TEST:10.075 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":43,"skipped":707,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:44.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-dbba4a97-595e-4c66-b65c-312ee3eda2df
STEP: Creating a pod to test consume configMaps
Jun 10 22:07:45.012: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134" in namespace "projected-6155" to be "Succeeded or Failed"
Jun 10 22:07:45.015: INFO: Pod "pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892468ms
Jun 10 22:07:47.018: INFO: Pod "pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006441885s
Jun 10 22:07:49.023: INFO: Pod "pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011115284s
Jun 10 22:07:51.027: INFO: Pod "pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015312327s
STEP: Saw pod success
Jun 10 22:07:51.027: INFO: Pod "pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134" satisfied condition "Succeeded or Failed"
Jun 10 22:07:51.030: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134 container agnhost-container:
STEP: delete the pod
Jun 10 22:07:51.159: INFO: Waiting for pod pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134 to disappear
Jun 10 22:07:51.162: INFO: Pod pod-projected-configmaps-da3cdb21-72df-41d0-a77d-c5ec9433e134 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:51.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6155" for this suite.
• [SLOW TEST:6.193 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":863,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:02:52.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0610 22:02:52.766812      35 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:52.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-1949" for this suite.
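(A suspended CronJob carries spec.suspend: true, so the controller creates no Jobs while the flag is set — hence the five-minute "Ensuring no jobs are scheduled" wait above; the deprecation warning appears because the test still used batch/v1beta1 on this v1.21 cluster. An illustrative batch/v1 equivalent:)

    kubectl apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: suspended-demo
    spec:
      schedule: "*/1 * * * *"
      suspend: true              # no Jobs are created while this is true
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: c
                image: busybox
                command: ["true"]
    EOF
    kubectl get jobs   # expect no Jobs to appear while suspended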
• [SLOW TEST:300.049 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":15,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:52.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Jun 10 22:07:52.890: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Jun 10 22:07:52.904: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:52.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-6860" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":16,"skipped":184,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:42.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026
Jun 10 22:07:42.753: INFO: Pod name my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026: Found 0 pods out of 1
Jun 10 22:07:47.757: INFO: Pod name my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026: Found 1 pods out of 1
Jun 10 22:07:47.757: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026" are running
Jun 10 22:07:49.763: INFO: Pod "my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026-jlzsg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:07:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:07:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:07:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:07:42 +0000 UTC Reason: Message:}])
Jun 10 22:07:49.764: INFO: Trying to dial the pod
Jun 10 22:07:54.774: INFO: Controller my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026: Got expected result from replica 1 [my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026-jlzsg]: "my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026-jlzsg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:54.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7083" for this suite.
• [SLOW TEST:12.062 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":26,"skipped":312,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:50.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-099f0dfa-db56-478c-942a-b0c7d050aea7
STEP: Creating a pod to test consume secrets
Jun 10 22:07:50.890: INFO: Waiting up to 5m0s for pod "pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7" in namespace "secrets-3094" to be "Succeeded or Failed"
Jun 10 22:07:50.893: INFO: Pod "pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195021ms
Jun 10 22:07:52.896: INFO: Pod "pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005201686s
Jun 10 22:07:54.902: INFO: Pod "pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011398228s
STEP: Saw pod success
Jun 10 22:07:54.902: INFO: Pod "pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7" satisfied condition "Succeeded or Failed"
Jun 10 22:07:54.905: INFO: Trying to get logs from node node2 pod pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7 container secret-volume-test:
STEP: delete the pod
Jun 10 22:07:54.946: INFO: Waiting for pod pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7 to disappear
Jun 10 22:07:54.947: INFO: Pod pod-secrets-d492d513-af66-4789-8520-1ec6e78f79d7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:54.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3094" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":584,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:52.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 10 22:07:52.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2" in namespace "projected-1652" to be "Succeeded or Failed"
Jun 10 22:07:52.982: INFO: Pod "downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413532ms
Jun 10 22:07:54.985: INFO: Pod "downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005781882s
Jun 10 22:07:56.988: INFO: Pod "downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00906889s
STEP: Saw pod success
Jun 10 22:07:56.988: INFO: Pod "downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2" satisfied condition "Succeeded or Failed"
Jun 10 22:07:56.991: INFO: Trying to get logs from node node2 pod downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2 container client-container:
STEP: delete the pod
Jun 10 22:07:57.003: INFO: Waiting for pod downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2 to disappear
Jun 10 22:07:57.005: INFO: Pod downwardapi-volume-3197efae-3ee2-4e42-a8a5-55abf4f646f2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:57.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1652" for this suite.
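(The downward API volume exposes the container's own CPU limit as a file via resourceFieldRef; the client-container above simply prints that file. A minimal sketch with illustrative names:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "1"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:           # resolves to the container's limits.cpu
              containerName: client-container
              resource: limits.cpu
    EOF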
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":194,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:50.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-9db76506-9ac5-459d-8eb1-8f49dd0aee92
STEP: Creating a pod to test consume secrets
Jun 10 22:07:51.030: INFO: Waiting up to 5m0s for pod "pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0" in namespace "secrets-8097" to be "Succeeded or Failed"
Jun 10 22:07:51.032: INFO: Pod "pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.951758ms
Jun 10 22:07:53.035: INFO: Pod "pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004781871s
Jun 10 22:07:55.039: INFO: Pod "pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008987155s
Jun 10 22:07:57.042: INFO: Pod "pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011964558s
STEP: Saw pod success
Jun 10 22:07:57.042: INFO: Pod "pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0" satisfied condition "Succeeded or Failed"
Jun 10 22:07:57.045: INFO: Trying to get logs from node node1 pod pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0 container secret-volume-test:
STEP: delete the pod
Jun 10 22:07:57.656: INFO: Waiting for pod pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0 to disappear
Jun 10 22:07:57.658: INFO: Pod pod-secrets-107193fe-9be7-4df5-8bfd-bf13e745cfc0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:57.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8097" for this suite.
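(The "Item Mode set" variant adds a per-item mode to the secret volume's items, overriding the volume's defaultMode for that one file. An illustrative volumes stanza — it slots into a pod spec like the one sketched after the earlier "mappings" test:)

    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400    # per-item file permission, overriding defaultMode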
• [SLOW TEST:6.672 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":715,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:18.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2358, will wait for the garbage collector to delete the pods
Jun 10 22:07:22.699: INFO: Deleting Job.batch foo took: 3.453061ms
Jun 10 22:07:22.799: INFO: Terminating Job.batch foo pods took: 100.209847ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:07:58.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2358" for this suite.
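(The Job spec deletes the Job and then waits for the garbage collector to remove its pods, which is why "Ensuring job was deleted" accounts for roughly 36 of the 40 seconds above. Approximate kubectl equivalents, with an illustrative job:)

    kubectl create job foo --image=busybox -- sleep 3600
    kubectl get pods -l job-name=foo   # active pods should match the Job's parallelism
    kubectl delete job foo             # the garbage collector then deletes the Job's pods
    kubectl get jobs                   # the Job is gone once its pods are cleaned up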
• [SLOW TEST:40.301 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:05:27.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-5999
STEP: creating service affinity-nodeport-transition in namespace services-5999
STEP: creating replication controller affinity-nodeport-transition in namespace services-5999
I0610 22:05:27.550825      39 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5999, replica count: 3
I0610 22:05:30.602679      39 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0610 22:05:33.602937      39 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0610 22:05:36.603672      39 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 10 22:05:36.614: INFO: Creating new exec pod
Jun 10 22:05:43.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Jun 10 22:05:43.910: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jun 10 22:05:43.910: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 10 22:05:43.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.51.56 80'
Jun 10 22:05:44.151: INFO: stderr: "+ nc -v -t -w 2 10.233.51.56 80\n+ echo hostName\nConnection to 10.233.51.56 80 port [tcp/http] succeeded!\n"
Jun 10 22:05:44.151: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 10 22:05:44.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715'
Jun 10 22:05:44.399: INFO: rc: 1
Jun 10 22:05:44.399: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715:
Command stdout:
stderr:
+ nc -v -t -w 2 10.10.190.207 30715
nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1
error: exit status 1
Retrying...
Jun 10 22:05:45.400 through Jun 10 22:06:20.648: INFO: the same probe is re-run roughly once per second (Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' followed by rc: 1); every attempt in this window fails with the identical stderr — nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused — and ends with Retrying...
Jun 10 22:06:21.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:21.952: INFO: rc: 1 Jun 10 22:06:21.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:22.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:22.654: INFO: rc: 1 Jun 10 22:06:22.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:23.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:23.650: INFO: rc: 1 Jun 10 22:06:23.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:24.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:24.646: INFO: rc: 1 Jun 10 22:06:24.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:25.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:25.676: INFO: rc: 1 Jun 10 22:06:25.676: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:26.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:26.668: INFO: rc: 1 Jun 10 22:06:26.669: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:27.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:27.632: INFO: rc: 1 Jun 10 22:06:27.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:28.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:29.269: INFO: rc: 1 Jun 10 22:06:29.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:29.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:29.643: INFO: rc: 1 Jun 10 22:06:29.643: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:30.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:30.651: INFO: rc: 1 Jun 10 22:06:30.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:31.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:31.680: INFO: rc: 1 Jun 10 22:06:31.680: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:32.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:32.644: INFO: rc: 1 Jun 10 22:06:32.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:33.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:33.666: INFO: rc: 1 Jun 10 22:06:33.666: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:34.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:34.652: INFO: rc: 1 Jun 10 22:06:34.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:35.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:35.646: INFO: rc: 1 Jun 10 22:06:35.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:36.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:36.766: INFO: rc: 1 Jun 10 22:06:36.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:37.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:37.689: INFO: rc: 1 Jun 10 22:06:37.689: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:38.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:40.033: INFO: rc: 1 Jun 10 22:06:40.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:40.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:42.176: INFO: rc: 1 Jun 10 22:06:42.176: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:42.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:42.680: INFO: rc: 1 Jun 10 22:06:42.680: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:43.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:43.641: INFO: rc: 1 Jun 10 22:06:43.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:44.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:45.248: INFO: rc: 1 Jun 10 22:06:45.248: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:45.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:45.702: INFO: rc: 1 Jun 10 22:06:45.702: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:46.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:46.649: INFO: rc: 1 Jun 10 22:06:46.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:47.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:47.645: INFO: rc: 1 Jun 10 22:06:47.645: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:48.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:48.651: INFO: rc: 1 Jun 10 22:06:48.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:49.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:49.635: INFO: rc: 1 Jun 10 22:06:49.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:50.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:50.662: INFO: rc: 1 Jun 10 22:06:50.662: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:51.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:51.697: INFO: rc: 1 Jun 10 22:06:51.697: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:52.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:52.649: INFO: rc: 1 Jun 10 22:06:52.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:53.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:53.654: INFO: rc: 1 Jun 10 22:06:53.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:54.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:54.641: INFO: rc: 1 Jun 10 22:06:54.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:06:55.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:55.648: INFO: rc: 1 Jun 10 22:06:55.648: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:56.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:56.674: INFO: rc: 1 Jun 10 22:06:56.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:57.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:57.649: INFO: rc: 1 Jun 10 22:06:57.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:06:58.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:06:59.636: INFO: rc: 1 Jun 10 22:06:59.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:00.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:00.796: INFO: rc: 1 Jun 10 22:07:00.796: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:01.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:02.073: INFO: rc: 1 Jun 10 22:07:02.073: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:02.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:02.670: INFO: rc: 1 Jun 10 22:07:02.670: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:03.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:03.754: INFO: rc: 1 Jun 10 22:07:03.754: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:04.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:04.797: INFO: rc: 1 Jun 10 22:07:04.797: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:05.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:05.652: INFO: rc: 1 Jun 10 22:07:05.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:06.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:06.663: INFO: rc: 1 Jun 10 22:07:06.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:07.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:08.021: INFO: rc: 1 Jun 10 22:07:08.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:08.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:08.902: INFO: rc: 1 Jun 10 22:07:08.902: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:09.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:09.679: INFO: rc: 1 Jun 10 22:07:09.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:10.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:10.683: INFO: rc: 1 Jun 10 22:07:10.683: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:11.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:11.644: INFO: rc: 1 Jun 10 22:07:11.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:12.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:12.660: INFO: rc: 1 Jun 10 22:07:12.660: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:13.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:13.701: INFO: rc: 1 Jun 10 22:07:13.702: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:14.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:14.661: INFO: rc: 1 Jun 10 22:07:14.661: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:15.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:15.669: INFO: rc: 1 Jun 10 22:07:15.669: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:16.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:16.659: INFO: rc: 1 Jun 10 22:07:16.659: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:17.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:17.759: INFO: rc: 1 Jun 10 22:07:17.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:18.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:18.640: INFO: rc: 1 Jun 10 22:07:18.640: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:19.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:20.079: INFO: rc: 1 Jun 10 22:07:20.079: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:20.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:20.634: INFO: rc: 1 Jun 10 22:07:20.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:21.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:21.660: INFO: rc: 1 Jun 10 22:07:21.660: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:22.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:22.661: INFO: rc: 1 Jun 10 22:07:22.661: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:23.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:23.645: INFO: rc: 1 Jun 10 22:07:23.645: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:24.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:24.671: INFO: rc: 1 Jun 10 22:07:24.671: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:25.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:25.650: INFO: rc: 1 Jun 10 22:07:25.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:26.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:26.679: INFO: rc: 1 Jun 10 22:07:26.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:07:27.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715' Jun 10 22:07:27.656: INFO: rc: 1 Jun 10 22:07:27.656: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30715 nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:07:28.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715'
Jun 10 22:07:28.947: INFO: rc: 1
Jun 10 22:07:28.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30715
nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
[... the same command is retried roughly once per second; every attempt from 22:07:29.400 through 22:07:43.400 fails identically with rc: 1 and "nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused" ...]
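What the framework is doing in this loop is, in essence, polling the NodePort endpoint once per second until it answers or a 2m0s deadline expires (the helper is execAffinityTestForNonLBServiceWithOptionalTransition in test/e2e/network/service.go, per the stack trace below). A minimal, stdlib-only Go sketch of that poll pattern follows; it is an illustration only, since the real test execs nc inside execpod-affinitybwztk rather than dialing directly. The endpoint and timeout values are taken from this log, and the loop's final attempts and the FAIL come right after.

// Sketch of a poll-until-reachable check, not the e2e framework's actual code.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitReachable dials addr over TCP once per interval until it connects
// or the overall timeout expires, mirroring the retry cadence seen above.
func waitReachable(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// 2s per-attempt timeout, matching `nc -w 2` in the probe command.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(interval)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := waitReachable("10.10.190.207:30715", time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Run against 10.10.190.207:30715 during this window, such a poll would fail exactly as the log does, since nothing is answering on the node port.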
Jun 10 22:07:44.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715'
Jun 10 22:07:44.723: INFO: rc: 1
Jun 10 22:07:44.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30715
nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 10 22:07:44.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715'
Jun 10 22:07:45.153: INFO: rc: 1
Jun 10 22:07:45.153: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5999 exec execpod-affinitybwztk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30715:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30715
nc: connect to 10.10.190.207 port 30715 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 10 22:07:45.154: FAIL: Unexpected error:
    <*errors.errorString | 0xc002414e50>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30715 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30715 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001255760, 0x77b33d8, 0xc00448ec60, 0xc001840000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00171a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00171a480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00171a480, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 10 22:07:45.156: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5999, will wait for the garbage collector to delete the pods
Jun 10 22:07:45.233: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.068437ms
Jun 10 22:07:45.333: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.722151ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5999".
STEP: Found 27 events.
Jun 10 22:07:57.753: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-cx685: { } Scheduled: Successfully assigned services-5999/affinity-nodeport-transition-cx685 to node1
Jun 10 22:07:57.753: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-rpgll: { } Scheduled: Successfully assigned services-5999/affinity-nodeport-transition-rpgll to node2
Jun 10 22:07:57.753: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-st24r: { } Scheduled: Successfully assigned services-5999/affinity-nodeport-transition-st24r to node2
Jun 10 22:07:57.753: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitybwztk: { } Scheduled: Successfully assigned services-5999/execpod-affinitybwztk to node1
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:27 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-rpgll
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:27 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-st24r
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:27 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-cx685
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:30 +0000 UTC - event for affinity-nodeport-transition-cx685: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:30 +0000 UTC - event for affinity-nodeport-transition-cx685: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 320.666821ms
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:30 +0000 UTC - event for affinity-nodeport-transition-cx685: {kubelet node1} Created: Created container affinity-nodeport-transition
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:30 +0000 UTC - event for affinity-nodeport-transition-rpgll: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 338.961884ms
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:30 +0000 UTC - event for affinity-nodeport-transition-rpgll: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:31 +0000 UTC - event for affinity-nodeport-transition-rpgll: {kubelet node2} Started: Started container affinity-nodeport-transition
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:31 +0000 UTC - event for affinity-nodeport-transition-rpgll: {kubelet node2} Created: Created container affinity-nodeport-transition
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:31 +0000 UTC - event for affinity-nodeport-transition-st24r: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 244.449977ms
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:31 +0000 UTC - event for affinity-nodeport-transition-st24r: {kubelet node2} Created: Created container affinity-nodeport-transition
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:31 +0000 UTC - event for affinity-nodeport-transition-st24r: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:32 +0000 UTC - event for affinity-nodeport-transition-cx685: {kubelet node1} Started: Started container affinity-nodeport-transition
Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:32 +0000 UTC - event for
affinity-nodeport-transition-st24r: {kubelet node2} Started: Started container affinity-nodeport-transition Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:40 +0000 UTC - event for execpod-affinitybwztk: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 277.535952ms Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:40 +0000 UTC - event for execpod-affinitybwztk: {kubelet node1} Started: Started container agnhost-container Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:40 +0000 UTC - event for execpod-affinitybwztk: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 10 22:07:57.753: INFO: At 2022-06-10 22:05:40 +0000 UTC - event for execpod-affinitybwztk: {kubelet node1} Created: Created container agnhost-container Jun 10 22:07:57.753: INFO: At 2022-06-10 22:07:45 +0000 UTC - event for affinity-nodeport-transition-cx685: {kubelet node1} Killing: Stopping container affinity-nodeport-transition Jun 10 22:07:57.753: INFO: At 2022-06-10 22:07:45 +0000 UTC - event for affinity-nodeport-transition-rpgll: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Jun 10 22:07:57.753: INFO: At 2022-06-10 22:07:45 +0000 UTC - event for affinity-nodeport-transition-st24r: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Jun 10 22:07:57.753: INFO: At 2022-06-10 22:07:45 +0000 UTC - event for execpod-affinitybwztk: {kubelet node1} Killing: Stopping container agnhost-container Jun 10 22:07:57.756: INFO: POD NODE PHASE GRACE CONDITIONS Jun 10 22:07:57.756: INFO: Jun 10 22:07:57.761: INFO: Logging node info for node master1 Jun 10 22:07:57.768: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 48888 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:52 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:52 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:52 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:52 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:57.769: INFO: Logging kubelet events for node master1 Jun 10 22:07:57.773: INFO: Logging pods the kubelet thinks is on node master1 Jun 10 22:07:57.797: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Init container install-cni ready: true, restart count 0 Jun 10 22:07:57.797: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:07:57.797: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:57.797: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container autoscaler ready: true, restart count 1 Jun 10 22:07:57.797: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:57.797: INFO: Container docker-registry ready: true, restart count 0 Jun 10 22:07:57.797: INFO: Container nginx ready: true, restart count 0 Jun 10 22:07:57.797: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container nfd-controller ready: true, restart count 0 Jun 10 22:07:57.797: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:57.797: INFO: Container prometheus-operator ready: true, restart count 0 Jun 10 22:07:57.797: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:57.797: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:57.797: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:57.797: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:07:57.797: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-scheduler ready: true, restart count 0 Jun 10 22:07:57.797: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.797: INFO: Container kube-proxy ready: true, restart count 3 Jun 10 22:07:57.897: INFO: Latency metrics for node master1 Jun 10 22:07:57.897: INFO: Logging node info for node master2 Jun 10 22:07:57.899: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43 
48830 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:50 +0000 
UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:50 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:50 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:50 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:57.900: INFO: Logging kubelet events for node master2 Jun 10 22:07:57.902: INFO: Logging pods the kubelet thinks is on node master2 Jun 10 22:07:57.911: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:57.911: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:57.911: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:57.911: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:57.911: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:57.911: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:07:57.911: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:57.911: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Container coredns ready: true, restart count 1 Jun 10 22:07:57.911: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 10 22:07:57.911: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Container kube-scheduler ready: true, restart count 3 Jun 10 22:07:57.911: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:57.911: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:07:57.992: INFO: Latency metrics for node master2 Jun 10 22:07:57.992: INFO: Logging node info for node master3 Jun 10 22:07:57.994: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 48783 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:47 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:47 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:47 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:47 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:07:57.995: INFO: Logging kubelet events for node master3 Jun 10 22:07:57.997: INFO: Logging pods the kubelet thinks is on node master3 Jun 10 22:07:58.006: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:07:58.006: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Container kube-scheduler ready: true, restart count 1 Jun 10 22:07:58.006: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:07:58.006: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:07:58.006: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:07:58.006: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:07:58.006: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Container coredns ready: true, restart count 1 Jun 10 22:07:58.006: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:07:58.006: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:07:58.006: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:07:58.006: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:07:58.006: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:07:58.090: INFO: Latency metrics for node master3 Jun 10 22:07:58.090: INFO: Logging node info for node node1 Jun 10 22:07:58.093: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 49091 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:57 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:57 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:57 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:57 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 22:07:58.094: INFO: Logging kubelet events for node node1
Jun 10 22:07:58.096: INFO: Logging pods the kubelet thinks is on node node1
Jun 10 22:07:58.108: INFO: externalname-service-m9jbh started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container externalname-service ready: true, restart count 0
Jun 10 22:07:58.108: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container discover ready: false, restart count 0
Jun 10 22:07:58.108: INFO: Container init ready: false, restart count 0
Jun 10 22:07:58.108: INFO: Container install ready: false, restart count 0
Jun 10 22:07:58.108: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container config-reloader ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container grafana ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container prometheus ready: true, restart count 1
Jun 10 22:07:58.108: INFO: pod-2 started at 2022-06-10 22:07:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container donothing ready: true, restart count 0
Jun 10 22:07:58.108: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:07:58.108: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:07:58.108: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:07:58.108: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 22:07:58.108: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container collectd ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:07:58.108: INFO: alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051 started at 2022-06-10 22:07:52 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051 ready: false, restart count 0
Jun 10 22:07:58.108: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:07:58.108: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:07:58.108: INFO: foo-5bj4z started at 2022-06-10 22:07:18 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container c ready: false, restart count 0
Jun 10 22:07:58.108: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:07:58.108: INFO: downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef started at 2022-06-10 22:07:55 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container client-container ready: false, restart count 0
Jun 10 22:07:58.108: INFO: execpodclt87 started at 2022-06-10 22:07:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container agnhost-container ready: true, restart count 0
Jun 10 22:07:58.108: INFO: test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 started at 2022-06-10 22:06:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container test-webserver ready: true, restart count 0
Jun 10 22:07:58.108: INFO: rc-test-hlfj2 started at 2022-06-10 22:07:38 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container rc-test ready: false, restart count 0
Jun 10 22:07:58.108: INFO: pod-handle-http-request started at 2022-06-10 22:07:54 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container agnhost-container ready: false, restart count 0
Jun 10 22:07:58.108: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 22:07:58.108: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:07:58.108: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:07:58.108: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:07:58.108: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:58.108: INFO: Container tas-extender ready: true, restart count 0
Jun 10 22:07:59.043: INFO: Latency metrics for node node1
Jun 10 22:07:59.043: INFO: Logging node info for node node2
Jun 10 22:07:59.045: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 49039 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true
feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:12:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:56 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:56 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:07:56 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:07:56 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 22:07:59.046: INFO: Logging kubelet events for node node2
Jun 10 22:07:59.049: INFO: Logging pods the kubelet thinks is on node node2
Jun 10 22:07:59.060: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:07:59.060: INFO: my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026-jlzsg started at 2022-06-10 22:07:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container my-hostname-basic-e09333b3-d6ca-48b8-8503-32c829e85026 ready: true, restart count 0
Jun 10 22:07:59.060: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 22:07:59.060: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:07:59.060: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:07:59.060: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container discover ready: false, restart count 0
Jun 10 22:07:59.060: INFO: Container init ready: false, restart count 0
Jun 10 22:07:59.060: INFO: Container install ready: false, restart count 0
Jun 10 22:07:59.060: INFO: externalname-service-5zzgz started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container externalname-service ready: true, restart count 0
Jun 10 22:07:59.060: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:07:59.060: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container collectd ready: true, restart count 0
Jun 10 22:07:59.060: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:07:59.060: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:07:59.060: INFO: pod-1 started at 2022-06-10 22:07:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container donothing ready: true, restart count 0
Jun 10 22:07:59.060: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 22:07:59.060: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:07:59.060: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:07:59.060: INFO: downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c started at 2022-06-10 22:07:57 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container dapi-container ready: false, restart count 0
Jun 10 22:07:59.060: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:07:59.060: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:07:59.060: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 22:07:59.060: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:07:59.060: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:07:59.060: INFO: pod-0 started at 2022-06-10 22:07:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:07:59.060: INFO: Container donothing ready: true, restart count 0
Jun 10 22:08:00.190: INFO: Latency metrics for node node2
Jun 10 22:08:00.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5999" for this suite.
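Editor's note: the two node dumps above ("Logging pods the kubelet thinks is on node ...") are the diagnostics the e2e framework prints when a spec in this batch fails. A minimal sketch of reproducing that per-node pod listing with client-go follows; the kubeconfig path and the node name "node1" are taken from this log, while the package layout and error handling are illustrative assumptions, not the framework's own helper code.

// list_node_pods.go: a hedged sketch, not the e2e framework's implementation.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as shown in this log (">>> kubeConfig: /root/.kube/config").
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Select pods bound to one node across all namespaces, mirroring the
	// "Logging pods the kubelet thinks is on node node1" dump above.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}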
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [152.682 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
  Jun 10 22:07:45.154: Unexpected error:
      <*errors.errorString | 0xc002414e50>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30715 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30715 over TCP protocol
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":268,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:54.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 10 22:07:55.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef" in namespace "projected-2499" to be "Succeeded or Failed"
Jun 10 22:07:55.007: INFO: Pod "downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.367328ms
Jun 10 22:07:57.009: INFO: Pod "downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005102358s
Jun 10 22:07:59.013: INFO: Pod "downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008824729s
Jun 10 22:08:01.017: INFO: Pod "downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012522944s
STEP: Saw pod success
Jun 10 22:08:01.017: INFO: Pod "downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef" satisfied condition "Succeeded or Failed"
Jun 10 22:08:01.019: INFO: Trying to get logs from node node1 pod downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef container client-container:
STEP: delete the pod
Jun 10 22:08:01.033: INFO: Waiting for pod downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef to disappear
Jun 10 22:08:01.036: INFO: Pod downwardapi-volume-a538c984-1fe4-470e-b6dc-11ee081ba5ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:01.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2499" for this suite.
• [SLOW TEST:6.075 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":591,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:51.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:07:51.230: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051" in namespace "security-context-test-9383" to be "Succeeded or Failed"
Jun 10 22:07:51.232: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051": Phase="Pending", Reason="", readiness=false. Elapsed: 1.813485ms
Jun 10 22:07:53.235: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005277715s
Jun 10 22:07:55.239: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008593295s
Jun 10 22:07:57.242: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011707749s
Jun 10 22:07:59.246: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016091866s
Jun 10 22:08:01.249: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019013716s
Jun 10 22:08:01.249: INFO: Pod "alpine-nnp-false-93a8a0a9-fa00-4ab7-b9fa-ab315cb2f051" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:01.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9383" for this suite.
• [SLOW TEST:10.068 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":873,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:57.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 10 22:07:57.717: INFO: Waiting up to 5m0s for pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c" in namespace "downward-api-5916" to be "Succeeded or Failed"
Jun 10 22:07:57.719: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350216ms
Jun 10 22:07:59.723: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006175014s
Jun 10 22:08:01.727: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010357277s
Jun 10 22:08:03.733: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015875336s
Jun 10 22:08:05.735: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018693077s
Jun 10 22:08:07.741: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024006882s
STEP: Saw pod success
Jun 10 22:08:07.741: INFO: Pod "downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c" satisfied condition "Succeeded or Failed"
Jun 10 22:08:07.744: INFO: Trying to get logs from node node2 pod downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c container dapi-container:
STEP: delete the pod
Jun 10 22:08:08.100: INFO: Waiting for pod downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c to disappear
Jun 10 22:08:08.102: INFO: Pod downward-api-cb56ad70-3589-4f49-92bb-f10066379c1c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:08.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5916" for this suite.
• [SLOW TEST:10.429 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:01.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Jun 10 22:08:01.343: INFO: observed Pod pod-test in namespace pods-110 in phase Pending with labels: map[test-pod-static:true] & conditions []
Jun 10 22:08:01.345: INFO: observed Pod pod-test in namespace pods-110 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC }]
Jun 10 22:08:06.500: INFO: observed Pod pod-test in namespace pods-110 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC }]
Jun 10 22:08:07.254: INFO: observed Pod pod-test in namespace pods-110 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC }]
Jun 10 22:08:10.291: INFO: Found Pod pod-test in namespace pods-110 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:08:01 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Jun 10 22:08:10.304: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: getting the PodStatus
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Jun 10 22:08:10.323: INFO: observed event type ADDED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
Jun 10 22:08:10.323: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:10.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-110" for this suite.
• [SLOW TEST:9.048 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":40,"skipped":883,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:00.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Jun 10 22:08:10.272: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Jun 10 22:08:10.329: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:10.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-924" for this suite.
• [SLOW TEST:10.121 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":17,"skipped":271,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:10.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Request ServerVersion
STEP: Confirm major version
Jun 10 22:08:10.383: INFO: Major version: 1
STEP: Confirm minor version
Jun 10 22:08:10.383: INFO: cleanMinorVersion: 21
Jun 10 22:08:10.383: INFO: Minor version: 21
[AfterEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:10.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-5552" for this suite.
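Editor's note: the server-version spec above boils down to a single discovery call. A minimal sketch with client-go follows, assuming the kubeconfig path shown in the log; this is not the test's own helper code, just an equivalent request.

// server_version.go: a hedged sketch of the "Request ServerVersion" step.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The discovery client asks the apiserver for its version, which is
	// what the test then splits into major and minor components.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Major version: %s, Minor version: %s\n", v.Major, v.Minor) // e.g. 1 and 21 in this run
}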
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":18,"skipped":286,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":725,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:08.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:08:08.148: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4bdf13bf-0799-4ca6-9138-716fc2c2b91b" in namespace "security-context-test-9269" to be "Succeeded or Failed"
Jun 10 22:08:08.151: INFO: Pod "busybox-user-65534-4bdf13bf-0799-4ca6-9138-716fc2c2b91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664328ms
Jun 10 22:08:10.154: INFO: Pod "busybox-user-65534-4bdf13bf-0799-4ca6-9138-716fc2c2b91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005896732s
Jun 10 22:08:12.158: INFO: Pod "busybox-user-65534-4bdf13bf-0799-4ca6-9138-716fc2c2b91b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00990534s
Jun 10 22:08:12.158: INFO: Pod "busybox-user-65534-4bdf13bf-0799-4ca6-9138-716fc2c2b91b" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:12.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9269" for this suite.
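Editor's note: the runAsUser spec above creates a busybox pod whose container is forced to run as UID 65534 and then waits for it to reach Succeeded, as the wait loop in the log shows. A hedged sketch of such a pod follows; the pod name, namespace "default", and command are illustrative assumptions rather than the test's actual values.

// run_as_user.go: a hedged sketch of a pod pinned to UID 65534.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	uid := int64(65534)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.28", // an image present on these nodes per the dump above
				Command: []string{"sh", "-c", "id -u"},
				// The property under test: the container process runs as UID 65534.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}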
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":725,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:10.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 10 22:08:10.457: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:15.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-706" for this suite. • [SLOW TEST:5.281 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":19,"skipped":311,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:10.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Jun 10 22:08:10.377: INFO: Waiting up to 5m0s for pod "test-pod-6799bd50-e0d5-4765-816e-3a4977322e30" in namespace "svcaccounts-7500" to be "Succeeded or Failed" Jun 10 22:08:10.380: INFO: Pod "test-pod-6799bd50-e0d5-4765-816e-3a4977322e30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.609732ms Jun 10 22:08:12.383: INFO: Pod "test-pod-6799bd50-e0d5-4765-816e-3a4977322e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006457147s Jun 10 22:08:14.388: INFO: Pod "test-pod-6799bd50-e0d5-4765-816e-3a4977322e30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011029313s Jun 10 22:08:16.393: INFO: Pod "test-pod-6799bd50-e0d5-4765-816e-3a4977322e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016394691s STEP: Saw pod success Jun 10 22:08:16.393: INFO: Pod "test-pod-6799bd50-e0d5-4765-816e-3a4977322e30" satisfied condition "Succeeded or Failed" Jun 10 22:08:16.395: INFO: Trying to get logs from node node2 pod test-pod-6799bd50-e0d5-4765-816e-3a4977322e30 container agnhost-container: STEP: delete the pod Jun 10 22:08:16.483: INFO: Waiting for pod test-pod-6799bd50-e0d5-4765-816e-3a4977322e30 to disappear Jun 10 22:08:16.486: INFO: Pod test-pod-6799bd50-e0d5-4765-816e-3a4977322e30 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:16.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7500" for this suite. • [SLOW TEST:6.155 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":41,"skipped":887,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:01.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:08:11.181: INFO: Deleting pod "var-expansion-ab9b9cbd-2023-4174-929e-9d25ec975714" in namespace "var-expansion-6782" Jun 10 22:08:11.186: INFO: Wait up to 5m0s for pod "var-expansion-ab9b9cbd-2023-4174-929e-9d25ec975714" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:17.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6782" for this suite. 
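What the var-expansion test above exercises: subPathExpr values are expanded from container environment variables, and the expanded result must stay a relative path, so a pod whose expansion yields an absolute path is expected to fail at volume setup. A minimal sketch under that assumption (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 30"]
    env:
    - name: ABS_PATH
      value: /tmp                   # deliberately absolute
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: $(ABS_PATH)      # expands to /tmp, so the kubelet should refuse the mount
  volumes:
  - name: work
    emptyDir: {}
EOF

The literal string passes API validation; the failure only surfaces on the node, which is why the test waits on the pod rather than on the create call.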
• [SLOW TEST:16.057 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":30,"skipped":649,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:31.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 10 22:07:31.932: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 10 22:07:50.822: INFO: >>> kubeConfig: /root/.kube/config Jun 10 22:07:59.504: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:18.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4957" for this suite. • [SLOW TEST:46.453 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":26,"skipped":444,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:12.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jun 10 22:08:12.226: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jun 10 22:08:12.230: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 10 22:08:12.230: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jun 10 22:08:12.246: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 10 22:08:12.246: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jun 10 22:08:12.264: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jun 10 22:08:12.264: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jun 10 22:08:19.312: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:19.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-7227" for this suite. 
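The defaults seen in the Verifying requests/limits lines above come straight from the LimitRange's defaultRequest (requests) and default (limits) blocks. A minimal sketch with the same values the log shows; the object and pod names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo
spec:
  limits:
  - type: Container
    defaultRequest:                 # merged into containers that omit requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                        # merged into containers that omit limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF
kubectl run defaults-demo --image=busybox:1.29 --restart=Never -- sleep 10
kubectl get pod defaults-demo -o jsonpath='{.spec.containers[0].resources}'

A pod created with no resources block in the same namespace should come back with both maps filled in, which is exactly what the verification lines assert.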
• [SLOW TEST:7.139 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":47,"skipped":741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":15,"skipped":363,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:58.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Jun 10 22:07:58.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 10 22:07:59.083: INFO: stderr: "" Jun 10 22:07:59.083: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Jun 10 22:07:59.083: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 10 22:07:59.083: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5720" to be "running and ready, or succeeded" Jun 10 22:07:59.086: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933853ms Jun 10 22:08:01.089: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005689497s Jun 10 22:08:03.094: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011057636s Jun 10 22:08:05.097: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013961522s Jun 10 22:08:07.100: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016695688s Jun 10 22:08:09.103: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 10.020345624s Jun 10 22:08:09.103: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 10 22:08:09.103: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Jun 10 22:08:09.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 logs logs-generator logs-generator' Jun 10 22:08:09.263: INFO: stderr: "" Jun 10 22:08:09.263: INFO: stdout: "I0610 22:08:02.219251 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/l7n 563\nI0610 22:08:02.419370 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/nc2l 547\nI0610 22:08:02.619638 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/vjvq 415\nI0610 22:08:02.819936 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/bkgd 375\nI0610 22:08:03.020232 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/9cv 444\nI0610 22:08:03.219304 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/fghs 324\nI0610 22:08:03.419584 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/lwk 252\nI0610 22:08:03.619857 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/h6m 359\nI0610 22:08:03.820125 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/5x7q 276\nI0610 22:08:04.019366 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/mnx 549\nI0610 22:08:04.219701 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/98j 570\nI0610 22:08:04.420037 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/zjd 384\nI0610 22:08:04.619328 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/6hq4 484\nI0610 22:08:04.819631 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/zkvt 493\nI0610 22:08:05.019934 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/k6w8 352\nI0610 22:08:05.220339 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/5fl 537\nI0610 22:08:05.419661 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/7x7 234\nI0610 22:08:05.619902 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/5cl5 548\nI0610 22:08:05.820035 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/p47k 544\nI0610 22:08:06.019254 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/ljk 457\nI0610 22:08:06.219579 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/8fqj 480\nI0610 22:08:06.419840 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/mmft 559\nI0610 22:08:06.620148 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/vfgk 234\nI0610 22:08:06.819377 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/n6k 551\nI0610 22:08:07.019620 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/4w56 321\nI0610 22:08:07.220011 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/2fpv 359\nI0610 22:08:07.420308 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/jzj8 330\nI0610 22:08:07.619726 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/c78 482\nI0610 22:08:07.820032 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/82nz 215\nI0610 22:08:08.019425 1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/ndx 441\nI0610 22:08:08.219901 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/7fh 492\nI0610 22:08:08.420198 1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/99g6 443\nI0610 22:08:08.619566 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/5gfc 374\nI0610 22:08:08.820061 1 logs_generator.go:76] 33 POST /api/v1/namespaces/kube-system/pods/8q5q 468\nI0610 22:08:09.019382 1 logs_generator.go:76] 34 GET
/api/v1/namespaces/kube-system/pods/z2m 359\nI0610 22:08:09.219890 1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/bdrt 279\n" STEP: limiting log lines Jun 10 22:08:09.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 logs logs-generator logs-generator --tail=1' Jun 10 22:08:09.431: INFO: stderr: "" Jun 10 22:08:09.431: INFO: stdout: "I0610 22:08:09.419267 1 logs_generator.go:76] 36 GET /api/v1/namespaces/ns/pods/5sdb 354\n" Jun 10 22:08:09.431: INFO: got output "I0610 22:08:09.419267 1 logs_generator.go:76] 36 GET /api/v1/namespaces/ns/pods/5sdb 354\n" STEP: limiting log bytes Jun 10 22:08:09.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 logs logs-generator logs-generator --limit-bytes=1' Jun 10 22:08:09.606: INFO: stderr: "" Jun 10 22:08:09.606: INFO: stdout: "I" Jun 10 22:08:09.606: INFO: got output "I" STEP: exposing timestamps Jun 10 22:08:09.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 logs logs-generator logs-generator --tail=1 --timestamps' Jun 10 22:08:09.777: INFO: stderr: "" Jun 10 22:08:09.777: INFO: stdout: "2022-06-10T22:08:09.619689699Z I0610 22:08:09.619621 1 logs_generator.go:76] 37 POST /api/v1/namespaces/ns/pods/sk9 504\n" Jun 10 22:08:09.777: INFO: got output "2022-06-10T22:08:09.619689699Z I0610 22:08:09.619621 1 logs_generator.go:76] 37 POST /api/v1/namespaces/ns/pods/sk9 504\n" STEP: restricting to a time range Jun 10 22:08:12.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 logs logs-generator logs-generator --since=1s' Jun 10 22:08:12.422: INFO: stderr: "" Jun 10 22:08:12.422: INFO: stdout: "I0610 22:08:11.419808 1 logs_generator.go:76] 46 PUT /api/v1/namespaces/default/pods/pzd 276\nI0610 22:08:11.620296 1 logs_generator.go:76] 47 PUT /api/v1/namespaces/kube-system/pods/mnp 460\nI0610 22:08:11.819636 1 logs_generator.go:76] 48 POST /api/v1/namespaces/default/pods/8hv 280\nI0610 22:08:12.019993 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/pdk5 235\nI0610 22:08:12.219639 1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/h8x 378\n" Jun 10 22:08:12.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 logs logs-generator logs-generator --since=24h' Jun 10 22:08:12.786: INFO: stderr: "" Jun 10 22:08:12.786: INFO: stdout: "I0610 22:08:02.219251 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/l7n 563\nI0610 22:08:02.419370 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/nc2l 547\nI0610 22:08:02.619638 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/vjvq 415\nI0610 22:08:02.819936 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/bkgd 375\nI0610 22:08:03.020232 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/9cv 444\nI0610 22:08:03.219304 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/fghs 324\nI0610 22:08:03.419584 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/lwk 252\nI0610 22:08:03.619857 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/h6m 359\nI0610 22:08:03.820125 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/5x7q 276\nI0610 22:08:04.019366 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/mnx 549\nI0610 22:08:04.219701 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/98j 570\nI0610 22:08:04.420037 1 
logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/zjd 384\nI0610 22:08:04.619328 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/6hq4 484\nI0610 22:08:04.819631 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/zkvt 493\nI0610 22:08:05.019934 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/k6w8 352\nI0610 22:08:05.220339 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/5fl 537\nI0610 22:08:05.419661 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/7x7 234\nI0610 22:08:05.619902 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/5cl5 548\nI0610 22:08:05.820035 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/p47k 544\nI0610 22:08:06.019254 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/ljk 457\nI0610 22:08:06.219579 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/8fqj 480\nI0610 22:08:06.419840 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/mmft 559\nI0610 22:08:06.620148 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/vfgk 234\nI0610 22:08:06.819377 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/n6k 551\nI0610 22:08:07.019620 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/4w56 321\nI0610 22:08:07.220011 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/2fpv 359\nI0610 22:08:07.420308 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/jzj8 330\nI0610 22:08:07.619726 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/c78 482\nI0610 22:08:07.820032 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/82nz 215\nI0610 22:08:08.019425 1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/ndx 441\nI0610 22:08:08.219901 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/7fh 492\nI0610 22:08:08.420198 1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/99g6 443\nI0610 22:08:08.619566 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/5gfc 374\nI0610 22:08:08.820061 1 logs_generator.go:76] 33 POST /api/v1/namespaces/kube-system/pods/8q5q 468\nI0610 22:08:09.019382 1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/z2m 359\nI0610 22:08:09.219890 1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/bdrt 279\nI0610 22:08:09.419267 1 logs_generator.go:76] 36 GET /api/v1/namespaces/ns/pods/5sdb 354\nI0610 22:08:09.619621 1 logs_generator.go:76] 37 POST /api/v1/namespaces/ns/pods/sk9 504\nI0610 22:08:09.820101 1 logs_generator.go:76] 38 GET /api/v1/namespaces/kube-system/pods/r77v 345\nI0610 22:08:10.019494 1 logs_generator.go:76] 39 POST /api/v1/namespaces/ns/pods/8xz 339\nI0610 22:08:10.219905 1 logs_generator.go:76] 40 GET /api/v1/namespaces/kube-system/pods/vj8 348\nI0610 22:08:10.420217 1 logs_generator.go:76] 41 GET /api/v1/namespaces/kube-system/pods/j279 254\nI0610 22:08:10.619464 1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/mk5m 454\nI0610 22:08:10.819789 1 logs_generator.go:76] 43 PUT /api/v1/namespaces/ns/pods/9s57 467\nI0610 22:08:11.020070 1 logs_generator.go:76] 44 PUT /api/v1/namespaces/ns/pods/l72s 254\nI0610 22:08:11.219464 1 logs_generator.go:76] 45 GET /api/v1/namespaces/kube-system/pods/tznz 242\nI0610 22:08:11.419808 1 logs_generator.go:76] 46 PUT /api/v1/namespaces/default/pods/pzd 276\nI0610 22:08:11.620296 1 logs_generator.go:76] 47 PUT /api/v1/namespaces/kube-system/pods/mnp 460\nI0610 22:08:11.819636 1 logs_generator.go:76] 48 POST /api/v1/namespaces/default/pods/8hv 280\nI0610 
22:08:12.019993 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/pdk5 235\nI0610 22:08:12.219639 1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/h8x 378\nI0610 22:08:12.420003 1 logs_generator.go:76] 51 POST /api/v1/namespaces/default/pods/knx 244\nI0610 22:08:12.620286 1 logs_generator.go:76] 52 GET /api/v1/namespaces/kube-system/pods/l4m 326\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Jun 10 22:08:12.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5720 delete pod logs-generator' Jun 10 22:08:19.568: INFO: stderr: "" Jun 10 22:08:19.568: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:19.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5720" for this suite. • [SLOW TEST:20.659 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":16,"skipped":363,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:07:54.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Jun 10 22:07:54.829: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:07:56.832: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:07:58.833: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:00.832: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 10 22:08:00.847: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:02.849: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:04.852: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:06.851: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:08.852: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:10.850: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 10 22:08:10.863: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 10 22:08:10.866: INFO: Pod pod-with-poststart-exec-hook still exists Jun 10 22:08:12.867: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 10 22:08:12.870: INFO: Pod pod-with-poststart-exec-hook still exists Jun 10 22:08:14.869: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 10 22:08:14.871: INFO: Pod pod-with-poststart-exec-hook still exists Jun 10 22:08:16.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 10 22:08:16.869: INFO: Pod pod-with-poststart-exec-hook still exists Jun 10 22:08:18.870: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 10 22:08:18.873: INFO: Pod pod-with-poststart-exec-hook still exists Jun 10 22:08:20.867: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 10 22:08:20.870: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:20.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1630" for this suite. 
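The postStart flow above uses a second pod (pod-handle-http-request) as an HTTP target for the hook. A self-contained sketch of the same mechanism, with the HTTP call swapped for a local command so it runs without the handler pod; the name and command are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "date > /tmp/poststart"]  # runs right after the container starts
EOF
kubectl exec poststart-demo -- cat /tmp/poststart

The container's state is not set to Running until the postStart handler completes, which is why the suite can check the hook fired before deleting the pod.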
• [SLOW TEST:26.091 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":313,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:19.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:21.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2731" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":17,"skipped":396,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:17.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-df0a0d68-873c-4ee6-beb1-01dd4b1804b9 STEP: Creating a pod to test consume secrets Jun 10 22:08:17.242: INFO: Waiting up to 5m0s for pod "pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442" in namespace "secrets-6729" to be "Succeeded or Failed" Jun 10 22:08:17.245: INFO: Pod "pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863586ms Jun 10 22:08:19.250: INFO: Pod "pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00724198s Jun 10 22:08:21.253: INFO: Pod "pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010974749s Jun 10 22:08:23.258: INFO: Pod "pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015987657s STEP: Saw pod success Jun 10 22:08:23.259: INFO: Pod "pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442" satisfied condition "Succeeded or Failed" Jun 10 22:08:23.261: INFO: Trying to get logs from node node2 pod pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442 container secret-volume-test: STEP: delete the pod Jun 10 22:08:23.290: INFO: Waiting for pod pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442 to disappear Jun 10 22:08:23.292: INFO: Pod pod-secrets-63d9e6e6-f781-4f49-a22f-30e587fa5442 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:23.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6729" for this suite. • [SLOW TEST:6.094 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":651,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:23.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:23.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3573" for this suite. 
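The sysctl rejection above happens at create time: names in the pod-level securityContext.sysctls list must follow the kernel parameter naming rules, and one malformed entry invalidates the whole pod. A minimal sketch; the specific invalid name here is illustrative, not one the suite generates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # well-formed, namespaced sysctl
      value: "0"
    - name: foo-                     # malformed: trailing dash fails validation
      value: "bar"
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

The apply should exit non-zero with an Invalid error from the API server, so no pod object is ever stored; rejection happens before scheduling, matching the test's expectation.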
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":32,"skipped":668,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:23.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:23.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-917" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":33,"skipped":681,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:15.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Jun 10 22:08:15.802: INFO: Waiting up to 5m0s for pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89" in namespace "containers-1422" to be "Succeeded or Failed" Jun 10 22:08:15.804: INFO: Pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051052ms Jun 10 22:08:17.807: INFO: Pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005320912s Jun 10 22:08:19.812: INFO: Pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009531916s Jun 10 22:08:21.816: INFO: Pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01405848s Jun 10 22:08:23.820: INFO: Pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01796264s STEP: Saw pod success Jun 10 22:08:23.820: INFO: Pod "client-containers-ac6641be-d389-483d-b46d-4ce98f172e89" satisfied condition "Succeeded or Failed" Jun 10 22:08:23.823: INFO: Trying to get logs from node node2 pod client-containers-ac6641be-d389-483d-b46d-4ce98f172e89 container agnhost-container: STEP: delete the pod Jun 10 22:08:23.838: INFO: Waiting for pod client-containers-ac6641be-d389-483d-b46d-4ce98f172e89 to disappear Jun 10 22:08:23.840: INFO: Pod client-containers-ac6641be-d389-483d-b46d-4ce98f172e89 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:23.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1422" for this suite. • [SLOW TEST:8.088 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":331,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:21.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:08:21.759: INFO: The status of Pod busybox-scheduling-c89b523e-0585-4b0b-a2b2-e9c69635d930 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:23.763: INFO: The status of Pod busybox-scheduling-c89b523e-0585-4b0b-a2b2-e9c69635d930 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:25.764: INFO: The status of Pod busybox-scheduling-c89b523e-0585-4b0b-a2b2-e9c69635d930 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:25.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-53" for this suite. 
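The Kubelet test above is the basic stdout-to-logs path, and it mirrors the suite's own kubectl usage. A minimal sketch (the pod name is illustrative):

kubectl run busybox-logs-demo --image=busybox:1.29 --restart=Never -- sh -c 'echo hello-from-busybox'
kubectl logs busybox-logs-demo    # once the container has run: hello-from-busybox

Anything the container writes to stdout or stderr is captured by the container runtime and served back through the kubelet, which is all that kubectl logs does.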
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":410,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:18.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 10 22:08:18.408: INFO: Waiting up to 5m0s for pod "pod-167f196d-a77e-4246-a980-784417189860" in namespace "emptydir-4448" to be "Succeeded or Failed" Jun 10 22:08:18.410: INFO: Pod "pod-167f196d-a77e-4246-a980-784417189860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203506ms Jun 10 22:08:20.414: INFO: Pod "pod-167f196d-a77e-4246-a980-784417189860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006573588s Jun 10 22:08:22.418: INFO: Pod "pod-167f196d-a77e-4246-a980-784417189860": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009887191s Jun 10 22:08:24.422: INFO: Pod "pod-167f196d-a77e-4246-a980-784417189860": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013860679s Jun 10 22:08:26.426: INFO: Pod "pod-167f196d-a77e-4246-a980-784417189860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018055855s STEP: Saw pod success Jun 10 22:08:26.426: INFO: Pod "pod-167f196d-a77e-4246-a980-784417189860" satisfied condition "Succeeded or Failed" Jun 10 22:08:26.428: INFO: Trying to get logs from node node2 pod pod-167f196d-a77e-4246-a980-784417189860 container test-container: STEP: delete the pod Jun 10 22:08:26.477: INFO: Waiting for pod pod-167f196d-a77e-4246-a980-784417189860 to disappear Jun 10 22:08:26.479: INFO: Pod pod-167f196d-a77e-4246-a980-784417189860 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:26.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4448" for this suite. 
• [SLOW TEST:8.116 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":448,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:19.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 10 22:08:19.431: INFO: The status of Pod annotationupdate25e637af-a829-4e73-a88c-835bdf8eb865 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:21.436: INFO: The status of Pod annotationupdate25e637af-a829-4e73-a88c-835bdf8eb865 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:23.436: INFO: The status of Pod annotationupdate25e637af-a829-4e73-a88c-835bdf8eb865 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:25.435: INFO: The status of Pod annotationupdate25e637af-a829-4e73-a88c-835bdf8eb865 is Pending, waiting for it to be Running (with Ready = true) Jun 10 22:08:27.434: INFO: The status of Pod annotationupdate25e637af-a829-4e73-a88c-835bdf8eb865 is Running (Ready = true) Jun 10 22:08:27.953: INFO: Successfully updated pod "annotationupdate25e637af-a829-4e73-a88c-835bdf8eb865" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:29.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-398" for this suite. 
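The annotation-update test above relies on the kubelet refreshing downward API volume contents after the pod's metadata changes. A minimal sketch of the same wiring through a projected volume; the pod name and annotation key are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build=2 --overwrite
kubectl exec annotation-demo -- cat /etc/podinfo/annotations

The file contents can lag the API update by up to the kubelet's sync period, which is why the suite polls rather than reading back immediately.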
• [SLOW TEST:10.581 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:20.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:08:20.929: INFO: Creating ReplicaSet my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3 Jun 10 22:08:20.936: INFO: Pod name my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3: Found 0 pods out of 1 Jun 10 22:08:25.939: INFO: Pod name my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3: Found 1 pods out of 1 Jun 10 22:08:25.939: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3" is running Jun 10 22:08:27.947: INFO: Pod "my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3-smhrq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:08:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:08:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:08:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 22:08:20 +0000 UTC Reason: Message:}]) Jun 10 22:08:27.948: INFO: Trying to dial the pod Jun 10 22:08:32.958: INFO: Controller my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3: Got expected result from replica 1 [my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3-smhrq]: "my-hostname-basic-5c4f8ef8-6052-4693-8551-44cb846242e3-smhrq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:32.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5659" for this suite. 
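The ReplicaSet test above runs an image that serves the pod's hostname over HTTP, then dials each replica and compares the response with the pod name. A minimal sketch using the same agnhost test image that appears elsewhere in this log; the ReplicaSet name is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname-demo
  template:
    metadata:
      labels:
        app: hostname-demo
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["serve-hostname"]   # replies to HTTP requests with the pod's hostname
EOF

Because a pod's hostname defaults to its generated pod name, the reply uniquely identifies which replica answered.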
• [SLOW TEST:12.059 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":28,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:25.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 10 22:08:26.108: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 10 22:08:28.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:08:30.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 10 22:08:32.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:08:35.134: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:35.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5822" for this suite. STEP: Destroying namespace "webhook-5822-markers" for this suite. 
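The property this webhook test pins down: the API server does not route ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects through admission webhooks, so a webhook cannot wedge the cluster by denying changes to webhook configurations, including its own deletion. A sketch of the kind of rule the test registers; the backing service and path are illustrative stand-ins for the suite's deployed webhook:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configs-demo
webhooks:
- name: deny-webhook-configs.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: default            # illustrative backend
      name: always-deny-svc
      path: /always-deny
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
EOF

Even with failurePolicy: Fail, creating and then deleting a separate dummy configuration still succeeds, because the matched resources are exempt from webhook admission; that is exactly the sequence logged above.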
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.416 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":19,"skipped":413,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:16.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 10 22:08:16.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8016 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Jun 10 22:08:16.686: INFO: stderr: "" Jun 10 22:08:16.686: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 10 22:08:26.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8016 get pod e2e-test-httpd-pod -o json' Jun 10 22:08:26.889: INFO: stderr: "" Jun 10 22:08:26.889: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.226\\\"\\n ],\\n \\\"mac\\\": \\\"ee:fc:65:29:d5:27\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.226\\\"\\n ],\\n \\\"mac\\\": \\\"ee:fc:65:29:d5:27\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-06-10T22:08:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8016\",\n \"resourceVersion\": \"49821\",\n \"uid\": \"555b9c1f-8b54-40d0-821a-2c6b42f662c8\"\n },\n \"spec\": 
{\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-j7qmf\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-j7qmf\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-10T22:08:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-10T22:08:23Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-10T22:08:23Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-10T22:08:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://c125bb6b8808fa0fff01c606bf7087d1a9da42e50cfb79701da55a154b3b03ff\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-06-10T22:08:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.226\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.226\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-06-10T22:08:16Z\"\n }\n}\n" STEP: replace the image in the pod Jun 10 22:08:26.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8016 replace -f -' Jun 10 22:08:27.281: INFO: stderr: "" Jun 10 22:08:27.281: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace 
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:23.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-4194
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4194 to expose endpoints map[]
Jun 10 22:08:23.904: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found
Jun 10 22:08:24.910: INFO: successfully validated that service multi-endpoint-test in namespace services-4194 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-4194
Jun 10 22:08:24.925: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:26.929: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:28.929: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:30.933: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:32.929: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4194 to expose endpoints map[pod1:[100]]
Jun 10 22:08:32.942: INFO: successfully validated that service multi-endpoint-test in namespace services-4194 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-4194
Jun 10 22:08:32.956: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:34.962: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:36.960: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4194 to expose endpoints map[pod1:[100] pod2:[101]]
Jun 10 22:08:36.971: INFO: successfully validated that service multi-endpoint-test in namespace services-4194 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-4194
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4194 to expose endpoints map[pod2:[101]]
Jun 10 22:08:37.989: INFO: successfully validated that service multi-endpoint-test in namespace services-4194 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-4194
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4194 to expose endpoints map[]
Jun 10 22:08:38.012: INFO: successfully validated that service multi-endpoint-test in namespace services-4194 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4194" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:14.158 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":21,"skipped":342,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
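The map[pod1:[100] pod2:[101]] assertions above are reads of the service's Endpoints object, keyed by pod name with the list of target ports. The same state is directly observable while such a test runs (namespace taken from this run, which destroys it afterwards):

$ kubectl -n services-4194 get endpoints multi-endpoint-test
$ kubectl -n services-4194 get endpoints multi-endpoint-test -o jsonpath='{.subsets[*].ports[*].port}'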
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:35.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 10 22:08:35.270: INFO: Waiting up to 5m0s for pod "downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3" in namespace "downward-api-9545" to be "Succeeded or Failed"
Jun 10 22:08:35.274: INFO: Pod "downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351643ms
Jun 10 22:08:37.278: INFO: Pod "downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007950897s
Jun 10 22:08:39.283: INFO: Pod "downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012897306s
STEP: Saw pod success
Jun 10 22:08:39.283: INFO: Pod "downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3" satisfied condition "Succeeded or Failed"
Jun 10 22:08:39.285: INFO: Trying to get logs from node node1 pod downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3 container dapi-container:
STEP: delete the pod
Jun 10 22:08:39.299: INFO: Waiting for pod downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3 to disappear
Jun 10 22:08:39.301: INFO: Pod downward-api-73e21fe4-2fbc-47c3-b6d2-fb1bfbcdebe3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:39.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9545" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":423,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
Jun 10 22:08:39.336: INFO: Running AfterSuite actions on all nodes
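The downward-api pod gets its own UID injected as an environment variable and exits once it has printed it. A minimal standalone equivalent, assuming the same busybox test image; the pod name dapi-demo and variable MY_POD_UID are made up for illustration:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "env | grep MY_POD_UID"]
    env:
    - name: MY_POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
$ kubectl logs dapi-demo    # prints MY_POD_UID=<uid> once the container has run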
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:26.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jun 10 22:08:26.807: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 10 22:08:26.820: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 10 22:08:28.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495706, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 10 22:08:31.840: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:08:31.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-964-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9951" for this suite.
STEP: Destroying namespace "webhook-9951-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.412 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":28,"skipped":469,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Jun 10 22:08:39.942: INFO: Running AfterSuite actions on all nodes
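The v1.DeploymentStatus dumps in the setup are just the framework polling until the webhook backend reports Available before registering it. The equivalent manual checks, with names from this run (the webhook-9951 namespaces are destroyed when the spec ends):

$ kubectl -n webhook-9951 rollout status deployment/sample-webhook-deployment
$ kubectl -n webhook-9951 get deployment sample-webhook-deployment \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'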
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:30.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:08:30.080: INFO: The status of Pod server-envvars-246bf887-fc4a-406e-9da9-698943e53d05 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:32.083: INFO: The status of Pod server-envvars-246bf887-fc4a-406e-9da9-698943e53d05 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 22:08:34.085: INFO: The status of Pod server-envvars-246bf887-fc4a-406e-9da9-698943e53d05 is Running (Ready = true)
Jun 10 22:08:34.104: INFO: Waiting up to 5m0s for pod "client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b" in namespace "pods-2636" to be "Succeeded or Failed"
Jun 10 22:08:34.106: INFO: Pod "client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.82679ms
Jun 10 22:08:36.109: INFO: Pod "client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005609733s
Jun 10 22:08:38.114: INFO: Pod "client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009895051s
Jun 10 22:08:40.117: INFO: Pod "client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013766721s
STEP: Saw pod success
Jun 10 22:08:40.118: INFO: Pod "client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b" satisfied condition "Succeeded or Failed"
Jun 10 22:08:40.120: INFO: Trying to get logs from node node2 pod client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b container env3cont:
STEP: delete the pod
Jun 10 22:08:40.142: INFO: Waiting for pod client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b to disappear
Jun 10 22:08:40.144: INFO: Pod client-envvars-4f009d98-edc1-4765-a9df-ba9b96269a2b no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:40.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2636" for this suite.
• [SLOW TEST:10.110 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":809,"failed":0}
Jun 10 22:08:40.154: INFO: Running AfterSuite actions on all nodes
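What the client pod asserts here is the kubelet-injected service environment: for every service visible when the pod starts, variables of the form <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT. From any running pod this can be eyeballed with (<pod> is a placeholder for a pod name):

$ kubectl -n pods-2636 exec <pod> -- sh -c 'env | grep _SERVICE_'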
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:38.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Jun 10 22:08:38.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1095 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod'
Jun 10 22:08:38.268: INFO: stderr: ""
Jun 10 22:08:38.268: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Jun 10 22:08:38.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1095 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server'
Jun 10 22:08:38.652: INFO: stderr: ""
Jun 10 22:08:38.652: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Jun 10 22:08:38.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1095 delete pods e2e-test-httpd-pod'
Jun 10 22:08:47.097: INFO: stderr: ""
Jun 10 22:08:47.097: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:47.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1095" for this suite.
• [SLOW TEST:9.026 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":22,"skipped":368,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Jun 10 22:08:47.108: INFO: Running AfterSuite actions on all nodes
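A --dry-run=server request runs the full admission and validation chain on the API server but persists nothing, so the read-back must still show the original image. By hand, mirroring the commands above:

$ kubectl -n kubectl-1095 patch pod e2e-test-httpd-pod --dry-run=server \
    -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
$ kubectl -n kubectl-1095 get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
# expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1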
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:37.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 10 22:08:37.429: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 10 22:08:39.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 22:08:41.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 10 22:08:44.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:08:44.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9473-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:08:52.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8906" for this suite.
STEP: Destroying namespace "webhook-8906-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.465 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":43,"skipped":906,"failed":0}
Jun 10 22:08:52.573: INFO: Running AfterSuite actions on all nodes
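Registrations such as e2e-test-webhook-9473-crds.webhook.example.com are ordinary cluster-scoped API objects and can be inspected while a spec like this is in flight, for example:

$ kubectl get mutatingwebhookconfigurations
$ kubectl get crd | grep webhook.example.com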
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790495717, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 10 22:08:44.451: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:08:44.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9473-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:08:52.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8906" for this suite. STEP: Destroying namespace "webhook-8906-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.465 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":43,"skipped":906,"failed":0} Jun 10 22:08:52.573: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:08:33.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-3274 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-3274 Jun 10 22:08:33.049: INFO: Found 0 stateful pods, waiting for 1 Jun 10 22:08:43.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the 
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:04.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-961
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-961
I0610 22:07:04.099229 30 runners.go:190] Created replication controller with name: externalname-service, namespace: services-961, replica count: 2
I0610 22:07:07.151088 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 10 22:07:07.151: INFO: Creating new exec pod
Jun 10 22:07:12.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jun 10 22:07:12.434: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jun 10 22:07:12.434: INFO: stdout: "externalname-service-5zzgz"
Jun 10 22:07:12.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.48.149 80'
Jun 10 22:07:12.698: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.48.149 80\nConnection to 10.233.48.149 80 port [tcp/http] succeeded!\n"
Jun 10 22:07:12.698: INFO: stdout: "externalname-service-m9jbh"
Jun 10 22:07:12.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833'
Jun 10 22:07:12.961: INFO: rc: 1
Jun 10 22:07:12.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31833
nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same probe is retried roughly once per second; every attempt from 22:07:13.962 through 22:07:59.963 fails identically with "nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused" ...]
Jun 10 22:08:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833'
Jun 10 22:08:01.502: INFO: rc: 1
Jun 10 22:08:01.502: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31833
nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 22:08:01.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:02.219: INFO: rc: 1 Jun 10 22:08:02.219: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:02.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:03.199: INFO: rc: 1 Jun 10 22:08:03.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:03.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:04.207: INFO: rc: 1 Jun 10 22:08:04.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:04.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:05.220: INFO: rc: 1 Jun 10 22:08:05.221: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31833 + echo hostName nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:05.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:06.215: INFO: rc: 1 Jun 10 22:08:06.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:06.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:07.218: INFO: rc: 1 Jun 10 22:08:07.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:07.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:08.196: INFO: rc: 1 Jun 10 22:08:08.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:08.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:09.360: INFO: rc: 1 Jun 10 22:08:09.360: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31833 + echo hostName nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:09.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:10.245: INFO: rc: 1 Jun 10 22:08:10.245: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:10.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:11.319: INFO: rc: 1 Jun 10 22:08:11.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:11.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:12.288: INFO: rc: 1 Jun 10 22:08:12.288: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:12.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:13.190: INFO: rc: 1 Jun 10 22:08:13.190: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:13.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:14.209: INFO: rc: 1 Jun 10 22:08:14.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:14.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:15.221: INFO: rc: 1 Jun 10 22:08:15.221: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:15.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:16.277: INFO: rc: 1 Jun 10 22:08:16.278: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:16.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:17.225: INFO: rc: 1 Jun 10 22:08:17.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:17.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:18.209: INFO: rc: 1 Jun 10 22:08:18.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:18.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:19.217: INFO: rc: 1 Jun 10 22:08:19.217: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + + echonc hostName -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:19.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:20.225: INFO: rc: 1 Jun 10 22:08:20.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:20.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:21.201: INFO: rc: 1 Jun 10 22:08:21.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:21.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:22.259: INFO: rc: 1 Jun 10 22:08:22.259: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:23.209: INFO: rc: 1 Jun 10 22:08:23.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:23.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:24.207: INFO: rc: 1 Jun 10 22:08:24.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:24.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:25.196: INFO: rc: 1 Jun 10 22:08:25.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:25.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:26.205: INFO: rc: 1 Jun 10 22:08:26.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:26.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:27.202: INFO: rc: 1 Jun 10 22:08:27.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:27.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:29.555: INFO: rc: 1 Jun 10 22:08:29.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:29.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:30.238: INFO: rc: 1 Jun 10 22:08:30.238: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:30.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:31.220: INFO: rc: 1 Jun 10 22:08:31.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:31.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:32.206: INFO: rc: 1 Jun 10 22:08:32.206: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:32.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:33.260: INFO: rc: 1 Jun 10 22:08:33.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:33.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:34.556: INFO: rc: 1 Jun 10 22:08:34.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:34.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:35.191: INFO: rc: 1 Jun 10 22:08:35.191: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:35.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:36.408: INFO: rc: 1 Jun 10 22:08:36.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:36.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:37.229: INFO: rc: 1 Jun 10 22:08:37.230: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:37.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:38.241: INFO: rc: 1 Jun 10 22:08:38.241: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:38.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:39.220: INFO: rc: 1 Jun 10 22:08:39.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:39.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:40.266: INFO: rc: 1 Jun 10 22:08:40.266: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:40.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:41.200: INFO: rc: 1 Jun 10 22:08:41.200: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:41.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:42.233: INFO: rc: 1 Jun 10 22:08:42.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:42.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:43.211: INFO: rc: 1 Jun 10 22:08:43.211: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:43.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:44.212: INFO: rc: 1 Jun 10 22:08:44.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:44.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:45.209: INFO: rc: 1 Jun 10 22:08:45.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:45.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:46.207: INFO: rc: 1 Jun 10 22:08:46.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:46.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:47.236: INFO: rc: 1 Jun 10 22:08:47.236: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:47.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:48.208: INFO: rc: 1 Jun 10 22:08:48.208: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:48.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:49.227: INFO: rc: 1 Jun 10 22:08:49.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:49.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:50.227: INFO: rc: 1 Jun 10 22:08:50.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:50.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:51.205: INFO: rc: 1 Jun 10 22:08:51.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:51.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:52.225: INFO: rc: 1 Jun 10 22:08:52.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:52.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:53.192: INFO: rc: 1 Jun 10 22:08:53.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:53.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:55.069: INFO: rc: 1 Jun 10 22:08:55.069: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:55.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:56.206: INFO: rc: 1 Jun 10 22:08:56.206: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:56.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:57.315: INFO: rc: 1 Jun 10 22:08:57.315: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:08:57.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:08:59.014: INFO: rc: 1 Jun 10 22:08:59.014: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:08:59.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:00.220: INFO: rc: 1 Jun 10 22:09:00.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:02.123: INFO: rc: 1 Jun 10 22:09:02.123: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:02.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:03.210: INFO: rc: 1 Jun 10 22:09:03.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:03.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:04.194: INFO: rc: 1 Jun 10 22:09:04.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:04.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:05.214: INFO: rc: 1 Jun 10 22:09:05.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:09:05.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:06.211: INFO: rc: 1 Jun 10 22:09:06.212: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:06.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:07.196: INFO: rc: 1 Jun 10 22:09:07.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:07.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:08.209: INFO: rc: 1 Jun 10 22:09:08.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:08.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:09.208: INFO: rc: 1 Jun 10 22:09:09.208: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:09.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:10.209: INFO: rc: 1 Jun 10 22:09:10.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 10 22:09:10.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:11.195: INFO: rc: 1 Jun 10 22:09:11.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:11.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:12.209: INFO: rc: 1 Jun 10 22:09:12.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:12.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:13.192: INFO: rc: 1 Jun 10 22:09:13.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 10 22:09:13.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833' Jun 10 22:09:13.435: INFO: rc: 1 Jun 10 22:09:13.435: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-961 exec execpodclt87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31833: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31833 nc: connect to 10.10.190.207 port 31833 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
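Note: every retry above is the same reachability probe. The framework runs kubectl exec against pod execpodclt87 and, from inside that pod, attempts a TCP connection to node IP 10.10.190.207 on NodePort 31833 with a 2-second nc timeout, roughly once per second, until a two-minute budget runs out. A minimal standalone Go sketch of the same check follows, under the assumption that the machine running it can reach the node IP directly; the real test dials from inside the exec pod, not from the test runner.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const addr = "10.10.190.207:31833" // node IP and NodePort taken from the log above

	deadline := time.Now().Add(2 * time.Minute) // overall budget, matching the test's 2m0s timeout
	for time.Now().Before(deadline) {
		// Mirrors `nc -v -t -w 2 10.10.190.207 31833`: TCP connect with a 2s per-attempt timeout.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service reachable on", addr)
			return
		}
		fmt.Printf("rc: 1 (%v), Retrying...\n", err)
		time.Sleep(1 * time.Second) // the log shows roughly one attempt per second
	}
	fmt.Fprintf(os.Stderr, "service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", addr)
	os.Exit(1)
}

The per-attempt 2s timeout keeps a single hung connect from consuming the overall two-minute budget, which is why the log shows a steady once-per-second cadence of refused connections rather than one long stall.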
Jun 10 22:09:13.436: FAIL: Unexpected error: <*errors.errorString | 0xc003c24ed0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31833 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31833 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.15() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0029b2c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0029b2c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0029b2c00, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 Jun 10 22:09:13.437: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-961". STEP: Found 17 events. Jun 10 22:09:13.457: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodclt87: { } Scheduled: Successfully assigned services-961/execpodclt87 to node1 Jun 10 22:09:13.457: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-5zzgz: { } Scheduled: Successfully assigned services-961/externalname-service-5zzgz to node2 Jun 10 22:09:13.457: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-m9jbh: { } Scheduled: Successfully assigned services-961/externalname-service-m9jbh to node1 Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:04 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-m9jbh Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:04 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-5zzgz Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:05 +0000 UTC - event for externalname-service-5zzgz: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:05 +0000 UTC - event for externalname-service-5zzgz: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 253.057206ms Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:05 +0000 UTC - event for externalname-service-m9jbh: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:05 +0000 UTC - event for externalname-service-m9jbh: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 255.547194ms Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:05 +0000 UTC - event for externalname-service-m9jbh: {kubelet node1} Created: Created container externalname-service Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:06 +0000 UTC - event for externalname-service-5zzgz: {kubelet node2} Started: Started container externalname-service Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:06 +0000 UTC - event for externalname-service-5zzgz: {kubelet node2} Created: Created container externalname-service Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:06 +0000 UTC - event for externalname-service-m9jbh: {kubelet node1} Started: Started container externalname-service Jun 10 22:09:13.457: INFO: At 
Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:09 +0000 UTC - event for execpodclt87: {kubelet node1} Started: Started container agnhost-container
Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:09 +0000 UTC - event for execpodclt87: {kubelet node1} Created: Created container agnhost-container
Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:09 +0000 UTC - event for execpodclt87: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 22:09:13.457: INFO: At 2022-06-10 22:07:09 +0000 UTC - event for execpodclt87: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 319.55881ms
Jun 10 22:09:13.460: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
Jun 10 22:09:13.460: INFO: execpodclt87                node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:07 +0000 UTC }]
Jun 10 22:09:13.460: INFO: externalname-service-5zzgz  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:04 +0000 UTC }]
Jun 10 22:09:13.460: INFO: externalname-service-m9jbh  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 22:07:04 +0000 UTC }]
Jun 10 22:09:13.460: INFO: 
Jun 10 22:09:13.465: INFO: Logging node info for node master1
Jun 10 22:09:13.467: INFO: Node Info: &Node{ObjectMeta:{master1 e472448e-87fd-4e8d-bbb7-98d43d3d8a87 50684 0 2022-06-10 19:57:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:12 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:12 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:12 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:09:12 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:09:13.468: INFO: Logging kubelet events for node master1 Jun 10 22:09:13.470: INFO: Logging pods the kubelet thinks is on node master1 Jun 10 22:09:13.492: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Init container install-cni ready: true, restart count 0 Jun 10 22:09:13.492: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:09:13.492: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:09:13.492: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container autoscaler ready: true, restart count 1 Jun 10 22:09:13.492: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:09:13.492: INFO: Container prometheus-operator ready: true, restart count 0 Jun 10 22:09:13.492: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:09:13.492: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:09:13.492: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:09:13.492: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:09:13.492: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-scheduler ready: true, restart count 0 Jun 10 22:09:13.492: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container kube-proxy ready: true, restart count 3 Jun 10 22:09:13.492: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.492: INFO: Container docker-registry ready: true, restart count 0 Jun 10 22:09:13.492: INFO: Container nginx ready: true, restart count 0 Jun 10 22:09:13.492: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 
2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.492: INFO: Container nfd-controller ready: true, restart count 0 Jun 10 22:09:13.588: INFO: Latency metrics for node master1 Jun 10 22:09:13.588: INFO: Logging node info for node master2 Jun 10 22:09:13.591: INFO: Node Info: &Node{ObjectMeta:{master2 66c7af40-c8de-462b-933d-792f10a44a43 50678 0 2022-06-10 19:58:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:11 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:11 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:11 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:09:11 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:09:13.591: INFO: Logging kubelet events for node master2 Jun 10 22:09:13.593: INFO: Logging pods the kubelet thinks is on node master2 Jun 10 22:09:13.601: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:09:13.601: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Container coredns ready: true, restart count 1 Jun 10 22:09:13.601: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 10 22:09:13.601: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Container kube-scheduler ready: true, restart count 3 Jun 10 22:09:13.601: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:09:13.601: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:09:13.601: INFO: Container kube-flannel ready: true, restart count 1 Jun 10 22:09:13.601: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.601: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:09:13.601: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.601: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:09:13.601: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:09:13.683: INFO: Latency metrics for node 
master2 Jun 10 22:09:13.683: INFO: Logging node info for node master3 Jun 10 22:09:13.686: INFO: Node Info: &Node{ObjectMeta:{master3 e51505ec-e791-4bbe-aeb1-bd0671fd4464 50672 0 2022-06-10 19:58:16 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 
20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:08 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:08 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:08 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:09:08 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:09:13.687: INFO: Logging kubelet events for node master3 Jun 10 22:09:13.689: INFO: Logging pods the kubelet thinks is on node master3 Jun 10 22:09:13.697: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Container kube-scheduler ready: true, restart count 1 Jun 10 22:09:13.697: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:09:13.697: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:09:13.697: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:09:13.697: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:09:13.697: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Container coredns ready: true, restart count 1 Jun 10 22:09:13.697: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.697: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:09:13.697: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:09:13.697: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Container kube-apiserver ready: true, restart count 0 Jun 10 22:09:13.697: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.697: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 10 22:09:13.771: INFO: Latency metrics for node master3 Jun 10 22:09:13.771: INFO: Logging node info for node node1 Jun 10 22:09:13.774: INFO: Node Info: &Node{ObjectMeta:{node1 fa951133-0317-499e-8a0a-fc7a0636a371 50675 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 
2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:09 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:09 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:09 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:09:09 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:09:13.775: INFO: 
Logging kubelet events for node node1 Jun 10 22:09:13.777: INFO: Logging pods the kubelet thinks is on node node1 Jun 10 22:09:13.792: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:09:13.792: INFO: execpodclt87 started at 2022-06-10 22:07:07 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container agnhost-container ready: true, restart count 0 Jun 10 22:09:13.792: INFO: test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 started at 2022-06-10 22:06:43 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container test-webserver ready: true, restart count 0 Jun 10 22:09:13.792: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:09:13.792: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Init container install-cni ready: true, restart count 2 Jun 10 22:09:13.792: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:09:13.792: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:09:13.792: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container tas-extender ready: true, restart count 0 Jun 10 22:09:13.792: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 10 22:09:13.792: INFO: Container discover ready: false, restart count 0 Jun 10 22:09:13.792: INFO: Container init ready: false, restart count 0 Jun 10 22:09:13.792: INFO: Container install ready: false, restart count 0 Jun 10 22:09:13.792: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 10 22:09:13.792: INFO: Container config-reloader ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container grafana ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container prometheus ready: true, restart count 1 Jun 10 22:09:13.792: INFO: externalname-service-m9jbh started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container externalname-service ready: true, restart count 0 Jun 10 22:09:13.792: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:09:13.792: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.792: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:09:13.792: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.792: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:09:13.792: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded) Jun 10 
22:09:13.792: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 22:09:13.792: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded) Jun 10 22:09:13.792: INFO: Container collectd ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:09:13.792: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.792: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:09:13.792: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:09:13.940: INFO: Latency metrics for node node1 Jun 10 22:09:13.940: INFO: Logging node info for node node2 Jun 10 22:09:13.944: INFO: Node Info: &Node{ObjectMeta:{node2 e3ba5b73-7a35-4d3f-9138-31db06c90dc3 50669 0 2022-06-10 19:59:19 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 20:12:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:07 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:07 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 22:09:07 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 22:09:07 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 10 22:09:13.945: INFO: Logging kubelet events for node node2 Jun 10 22:09:13.947: INFO: Logging pods the kubelet thinks is on node node2 Jun 10 22:09:13.959: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.959: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:09:13.959: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.959: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:09:13.959: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded) Jun 10 22:09:13.959: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:09:13.959: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:09:13.959: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded) Jun 10 22:09:13.959: INFO: Container nginx-proxy ready: true, restart 
count 2
Jun 10 22:09:13.959: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.959: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:09:13.959: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 22:09:13.959: INFO: Init container install-cni ready: true, restart count 2
Jun 10 22:09:13.959: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:09:13.959: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:09:13.959: INFO: Container discover ready: false, restart count 0
Jun 10 22:09:13.959: INFO: Container init ready: false, restart count 0
Jun 10 22:09:13.959: INFO: Container install ready: false, restart count 0
Jun 10 22:09:13.959: INFO: externalname-service-5zzgz started at 2022-06-10 22:07:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.959: INFO: Container externalname-service ready: true, restart count 0
Jun 10 22:09:13.959: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 22:09:13.960: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:09:13.960: INFO: forbid-27581648-nbp7d started at 2022-06-10 22:08:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container c ready: true, restart count 0
Jun 10 22:09:13.960: INFO: busybox-188ca118-db95-4325-9436-9b857db09e6b started at 2022-06-10 22:08:23 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container busybox ready: true, restart count 0
Jun 10 22:09:13.960: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container collectd ready: true, restart count 0
Jun 10 22:09:13.960: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:09:13.960: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:09:13.960: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 22:09:13.960: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 22:09:13.960: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:09:13.960: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:09:14.127: INFO: Latency metrics for node node2
Jun 10 22:09:14.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-961" for this suite.
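The services-961 spec torn down above (its failure is summarized just below) flips a Service from type ExternalName to type NodePort and then expects the allocated node port to answer on a node IP. A minimal client-go sketch of that type change, not taken from the test source; the namespace, service name, selector, and port here are illustrative, and the kubeconfig path is the one the suite logs:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	svcs := cs.CoreV1().Services("default")

	// Start with an ExternalName service: no cluster IP, just a CNAME target.
	svc, err := svcs.Create(context.TODO(), &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Flip it to NodePort: clear ExternalName, add a selector and port so
	// kube-proxy can program a node port that forwards to backing pods.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"app": "externalname-service"}
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}
	svc, err = svcs.Update(context.TODO(), svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allocated node port: %d\n", svc.Spec.Ports[0].NodePort)
}

The API flow itself succeeds in the run above (the externalname-service pods on both nodes are Ready); what fails is the last step, reaching the node port from outside, which points at traffic handling on the node rather than at the Service object.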
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [130.083 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jun 10 22:09:13.436: Unexpected error:
      <*errors.errorString | 0xc003c24ed0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31833 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31833 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":21,"skipped":326,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Jun 10 22:09:14.145: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:08:23.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-188ca118-db95-4325-9436-9b857db09e6b in namespace container-probe-5234
Jun 10 22:08:31.523: INFO: Started pod busybox-188ca118-db95-4325-9436-9b857db09e6b in namespace container-probe-5234
STEP: checking the pod's current state and verifying that restartCount is present
Jun 10 22:08:31.526: INFO: Initial restart count of pod busybox-188ca118-db95-4325-9436-9b857db09e6b is 0
Jun 10 22:09:15.627: INFO: Restart count of pod container-probe-5234/busybox-188ca118-db95-4325-9436-9b857db09e6b is now 1 (44.101620402s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:09:15.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5234" for this suite.
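The container-probe spec above watches restartCount climb once a `cat /tmp/health` exec probe starts failing. A sketch of the pod shape such a test creates, using the v1.21-era core/v1 types (the embedded Handler field was renamed ProbeHandler in later releases); the shell command that removes the file after a delay and the probe timings are illustrative, not copied from the test source:

package probes

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessPod builds a busybox pod whose exec probe passes until the
// container removes /tmp/health, after which the kubelet restarts it.
func livenessPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.28",
				// Create the health file, keep it briefly, then remove it
				// so the probe begins to fail (illustrative timings).
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

The restart recorded above (count 0 to 1 after roughly 44 seconds) is exactly this mechanism firing.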
• [SLOW TEST:52.161 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:06:43.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 in namespace container-probe-5969
Jun 10 22:06:47.279: INFO: Started pod test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 in namespace container-probe-5969
STEP: checking the pod's current state and verifying that restartCount is present
Jun 10 22:06:47.282: INFO: Initial restart count of pod test-webserver-130dfafe-d20f-42b6-9b5d-661bc0a0fc28 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:10:47.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5969" for this suite.
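The mirror-image spec above expects no restarts while a /healthz HTTP probe keeps succeeding; the pod stays at restart count 0 for the whole four-minute observation window. The probe differs from the exec variant only in its handler; a sketch with the same caveats as before (the port and timings are illustrative):

package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpProbe returns a liveness probe that GETs /healthz on the
// container's port 80; as long as the handler returns a 2xx/3xx
// status, the kubelet never restarts the container.
func httpProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(80),
			},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
}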
• [SLOW TEST:244.764 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":912,"failed":0}
Jun 10 22:10:48.002: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:07:57.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0610 22:07:57.040597 35 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:13:01.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5983" for this suite.
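The cronjob-5983 spec above drives a batch/v1beta1 CronJob (hence the deprecation warning in its log) whose concurrency policy forbids starting a new Job while the previous one is still running; the forbid-27581648-nbp7d pod seen in the node2 listing earlier is one of its runs. A minimal sketch of such an object; the schedule and the long-running busybox command are illustrative, chosen so a run outlives the schedule interval and the next run must be skipped:

package probes

import (
	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// forbidCronJob fires every minute, but ConcurrencyPolicy Forbid makes the
// controller skip a scheduled run while the previous Job is still active.
func forbidCronJob() *batchv1beta1.CronJob {
	return &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid"},
		Spec: batchv1beta1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1beta1.ForbidConcurrent,
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:  "c",
								Image: "busybox:1.28",
								// Outlive the 1-minute schedule so the next
								// run has to be forbidden (illustrative).
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
}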
• [SLOW TEST:304.059 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":18,"skipped":196,"failed":0}
Jun 10 22:13:01.079: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":685,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
Jun 10 22:09:15.647: INFO: Running AfterSuite actions on all nodes
Jun 10 22:13:01.118: INFO: Running AfterSuite actions on node 1
Jun 10 22:13:01.118: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

Ran 320 of 5773 Specs in 919.451 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 15m21.081075339s
Test Suite Failed
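Five of the six failures share one signature: a NodePort Service that never answers within the 2m0s window (for example 10.10.190.207:31833 above) while unrelated specs on the same nodes pass, which points at node-level traffic handling (kube-proxy or flannel rules on the node) rather than at the API objects the tests create. A minimal stand-alone probe of the same condition, assuming the node IP and port taken from the failure message; this mirrors the suite's retry-until-deadline reachability check rather than reproducing its actual helper:

package main

import (
	"fmt"
	"net"
	"time"
)

// main retries a plain TCP dial against the failing endpoint for up to
// 2 minutes, the same window the e2e suite allows before giving up.
func main() {
	const endpoint = "10.10.190.207:31833" // from the failure message
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("node port is reachable")
			return
		}
		fmt.Println("dial failed, retrying:", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint " + endpoint)
}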