Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654293329 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun 3 21:55:30.932: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:30.937: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 3 21:55:30.963: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 21:55:31.034: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 21:55:31.034: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 21:55:31.034: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 21:55:31.034: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 21:55:31.034: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 3 21:55:31.052: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 3 21:55:31.052: INFO: e2e test version: v1.21.9
Jun 3 21:55:31.052: INFO: kube-apiserver version: v1.21.1
Jun 3 21:55:31.053: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.059: INFO: Cluster IP family: ipv4
Jun 3 21:55:31.056: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.075: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
Jun 3 21:55:31.076: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.096: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun 3 21:55:31.084: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.103: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 3 21:55:31.082: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.105: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 3 21:55:31.088: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.107: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
Jun 3 21:55:31.090: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.113: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 3 21:55:31.091: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.115: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 3 21:55:31.090: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.116: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
Jun 3 21:55:31.101: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 21:55:31.123: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0603 21:55:31.162838 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.163: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.164: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create deployment with httpd image
Jun 3 21:55:31.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9754 create -f -'
Jun 3 21:55:31.600: INFO: stderr: ""
Jun 3 21:55:31.600: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Jun 3 21:55:31.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9754 diff -f -'
Jun 3 21:55:32.012: INFO: rc: 1
Jun 3 21:55:32.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9754 delete -f -'
Jun 3 21:55:32.142: INFO: stderr: ""
Jun 3 21:55:32.142: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:32.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9754" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
SSSSSSSSS
------------------------------
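The logged `rc: 1` above is how kubectl diff reports that the live Deployment differs from the declared manifest. The check can be reproduced by hand; a minimal sketch with illustrative names and images (the suite pipes its own manifests, which the log does not show):

  kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine
  # render the same Deployment with a different image and diff it against the live object
  kubectl set image deployment/httpd-deployment httpd=httpd:2.4.39-alpine --dry-run=client -o yaml \
    | kubectl diff -f -
  echo $?   # kubectl diff exits 1 when differences are found, 0 when none, >1 on error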
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
W0603 21:55:31.170555 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.170: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.172: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Jun 3 21:55:32.240: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Jun 3 21:55:32.384: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:32.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7244" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
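The Orphan propagation policy exercised here is exposed by kubectl as `--cascade=orphan` (kubectl 1.20+): deleting the owner leaves its dependents in place, now ownerless. A by-hand sketch with illustrative names:

  kubectl create deployment nginx-orphan-demo --image=nginx
  kubectl get rs -l app=nginx-orphan-demo
  # delete only the owner; dependents are orphaned instead of garbage-collected
  kubectl delete deployment nginx-orphan-demo --cascade=orphan
  kubectl get rs -l app=nginx-orphan-demo   # the ReplicaSet survives, as the test asserts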
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W0603 21:55:31.093342 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.093: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.096: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-069e1d81-7499-4832-92f7-7b99be0aa7d5
STEP: Creating a pod to test consume configMaps
Jun 3 21:55:31.119: INFO: Waiting up to 5m0s for pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420" in namespace "configmap-3987" to be "Succeeded or Failed"
Jun 3 21:55:31.123: INFO: Pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063772ms
Jun 3 21:55:33.127: INFO: Pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007769546s
Jun 3 21:55:35.130: INFO: Pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011251206s
Jun 3 21:55:37.135: INFO: Pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016023448s
Jun 3 21:55:39.140: INFO: Pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021116671s
STEP: Saw pod success
Jun 3 21:55:39.140: INFO: Pod "pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420" satisfied condition "Succeeded or Failed"
Jun 3 21:55:39.143: INFO: Trying to get logs from node node2 pod pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420 container configmap-volume-test:
STEP: delete the pod
Jun 3 21:55:39.371: INFO: Waiting for pod pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420 to disappear
Jun 3 21:55:39.374: INFO: Pod pod-configmaps-e59f25c6-8d82-49d6-88c3-8e4f615e0420 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:39.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3987" for this suite.

• [SLOW TEST:8.316 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
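The pod above consumes a single ConfigMap through two separate volume mounts. A minimal equivalent manifest, with illustrative names and paths (the suite generates its own):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-cm
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-cm-pod
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
      volumeMounts:
      - { name: cm-a, mountPath: /etc/cm-a }
      - { name: cm-b, mountPath: /etc/cm-b }
    volumes:
    - name: cm-a
      configMap: { name: demo-cm }
    - name: cm-b
      configMap: { name: demo-cm }
  EOF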
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0603 21:55:31.154825 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.155: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.156: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-cc08c4fa-cb7d-481e-aa87-0abfbeb38e28
STEP: Creating a pod to test consume secrets
Jun 3 21:55:31.174: INFO: Waiting up to 5m0s for pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7" in namespace "secrets-3037" to be "Succeeded or Failed"
Jun 3 21:55:31.177: INFO: Pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.702577ms
Jun 3 21:55:33.180: INFO: Pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005705196s
Jun 3 21:55:35.183: INFO: Pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00950425s
Jun 3 21:55:37.187: INFO: Pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012724497s
Jun 3 21:55:39.192: INFO: Pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017766197s
STEP: Saw pod success
Jun 3 21:55:39.192: INFO: Pod "pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7" satisfied condition "Succeeded or Failed"
Jun 3 21:55:39.195: INFO: Trying to get logs from node node2 pod pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7 container secret-env-test:
STEP: delete the pod
Jun 3 21:55:39.770: INFO: Waiting for pod pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7 to disappear
Jun 3 21:55:39.773: INFO: Pod pod-secrets-ceaa784c-0ee3-4c66-b695-bb942aa76ce7 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:39.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3037" for this suite.

• [SLOW TEST:8.651 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSS
------------------------------
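Here the secret is consumed as an environment variable rather than a volume, via `secretKeyRef`. A minimal sketch (names illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-secret-env
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: demo-secret
            key: data-1
  EOF
  kubectl logs demo-secret-env   # once the pod has completed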
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:32.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-11f99be0-4dc9-4f29-aeaf-c3614994cc30
STEP: Creating a pod to test consume secrets
Jun 3 21:55:32.215: INFO: Waiting up to 5m0s for pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee" in namespace "secrets-6809" to be "Succeeded or Failed"
Jun 3 21:55:32.218: INFO: Pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258739ms
Jun 3 21:55:34.224: INFO: Pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008836492s
Jun 3 21:55:36.227: INFO: Pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012200464s
Jun 3 21:55:38.231: INFO: Pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016150547s
Jun 3 21:55:40.237: INFO: Pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021663687s
STEP: Saw pod success
Jun 3 21:55:40.237: INFO: Pod "pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee" satisfied condition "Succeeded or Failed"
Jun 3 21:55:40.266: INFO: Trying to get logs from node node2 pod pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee container secret-volume-test:
STEP: delete the pod
Jun 3 21:55:40.281: INFO: Waiting for pod pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee to disappear
Jun 3 21:55:40.283: INFO: Pod pod-secrets-76b0663c-5503-4ea0-a24c-7f33c43700ee no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:40.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6809" for this suite.

• [SLOW TEST:8.114 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:32.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-578325e4-9d4b-49fc-a5b8-8bc9d72b4495
STEP: Creating secret with name secret-projected-all-test-volume-5c8cd8c5-6d5b-4533-befb-e48dfc2d42a4
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 3 21:55:32.657: INFO: Waiting up to 5m0s for pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9" in namespace "projected-2940" to be "Succeeded or Failed"
Jun 3 21:55:32.660: INFO: Pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608183ms
Jun 3 21:55:34.663: INFO: Pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005619821s
Jun 3 21:55:36.668: INFO: Pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010811068s
Jun 3 21:55:38.673: INFO: Pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015548887s
Jun 3 21:55:40.677: INFO: Pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020148753s
STEP: Saw pod success
Jun 3 21:55:40.677: INFO: Pod "projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9" satisfied condition "Succeeded or Failed"
Jun 3 21:55:40.680: INFO: Trying to get logs from node node2 pod projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9 container projected-all-volume-test:
STEP: delete the pod
Jun 3 21:55:40.695: INFO: Waiting for pod projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9 to disappear
Jun 3 21:55:40.697: INFO: Pod projected-volume-2c56f609-eed1-44c0-aef1-4ff950989aa9 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:40.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2940" for this suite.

• [SLOW TEST:8.085 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":96,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W0603 21:55:31.152228 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.152: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.154: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 3 21:55:31.168: INFO: Waiting up to 5m0s for pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08" in namespace "downward-api-1064" to be "Succeeded or Failed"
Jun 3 21:55:31.170: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08": Phase="Pending", Reason="", readiness=false. Elapsed: 1.811357ms
Jun 3 21:55:33.173: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0046732s
Jun 3 21:55:35.177: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00900098s
Jun 3 21:55:37.181: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013006692s
Jun 3 21:55:39.188: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019606037s
Jun 3 21:55:41.192: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023374276s
STEP: Saw pod success
Jun 3 21:55:41.192: INFO: Pod "downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08" satisfied condition "Succeeded or Failed"
Jun 3 21:55:41.194: INFO: Trying to get logs from node node1 pod downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08 container dapi-container:
STEP: delete the pod
Jun 3 21:55:41.213: INFO: Waiting for pod downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08 to disappear
Jun 3 21:55:41.215: INFO: Pod downward-api-7dc1db1a-d121-44f0-8643-8a863e1bde08 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:41.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1064" for this suite.

• [SLOW TEST:10.098 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
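The point of this test is that a `resourceFieldRef` for limits.cpu/limits.memory in a container with no limits set falls back to the node's allocatable values. A hedged sketch of the env wiring (pod and variable names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-downward-limits
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      # no resources.limits set: the values below resolve to node allocatable
      command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef: { resource: limits.cpu }
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef: { resource: limits.memory }
  EOF
  kubectl logs demo-downward-limits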
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0603 21:55:31.174621 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.174: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.176: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 21:55:31.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jun 3 21:55:31.194: INFO: The status of Pod pod-exec-websocket-e0314787-a484-4908-97ce-3a8e172b15d8 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:33.198: INFO: The status of Pod pod-exec-websocket-e0314787-a484-4908-97ce-3a8e172b15d8 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:35.196: INFO: The status of Pod pod-exec-websocket-e0314787-a484-4908-97ce-3a8e172b15d8 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:37.198: INFO: The status of Pod pod-exec-websocket-e0314787-a484-4908-97ce-3a8e172b15d8 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:39.201: INFO: The status of Pod pod-exec-websocket-e0314787-a484-4908-97ce-3a8e172b15d8 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:41.196: INFO: The status of Pod pod-exec-websocket-e0314787-a484-4908-97ce-3a8e172b15d8 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:41.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9783" for this suite.

• [SLOW TEST:10.130 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSS
------------------------------
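The exec subresource exercised here is ordinary HTTPS upgraded to a websocket; kubectl normally hides this. A rough sketch via kubectl proxy (wscat is an assumed third-party websocket client, and a real client must also negotiate one of the apiserver's channel subprotocols, e.g. base64.channel.k8s.io; output frames carry a leading channel byte):

  kubectl run ws-demo --image=busybox --restart=Never -- sleep 3600
  kubectl proxy --port=8001 &
  # repeated command= parameters form the argv; stdout/stderr select streams
  wscat -s base64.channel.k8s.io \
    -c 'ws://127.0.0.1:8001/api/v1/namespaces/default/pods/ws-demo/exec?command=echo&command=hello&stdout=true&stderr=true'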
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:41.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:41.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7886" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
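The three discovery documents the test walks can be fetched directly; each level narrows from API groups, to versions within a group, to the resources served by one version:

  kubectl get --raw /apis                           # root: lists the apiextensions.k8s.io group
  kubectl get --raw /apis/apiextensions.k8s.io      # group: lists the v1 version
  kubectl get --raw /apis/apiextensions.k8s.io/v1   # version: lists customresourcedefinitions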
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0603 21:55:31.166132 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.166: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.168: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-7d963ce5-8a48-46fd-a3fa-cf4c50a17bed
STEP: Creating a pod to test consume secrets
Jun 3 21:55:31.185: INFO: Waiting up to 5m0s for pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c" in namespace "secrets-1230" to be "Succeeded or Failed"
Jun 3 21:55:31.187: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173964ms
Jun 3 21:55:33.190: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005086957s
Jun 3 21:55:35.193: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008687814s
Jun 3 21:55:37.198: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013756322s
Jun 3 21:55:39.203: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018725564s
Jun 3 21:55:41.207: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022178862s
Jun 3 21:55:43.211: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026752072s
STEP: Saw pod success
Jun 3 21:55:43.212: INFO: Pod "pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c" satisfied condition "Succeeded or Failed"
Jun 3 21:55:43.214: INFO: Trying to get logs from node node1 pod pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c container secret-volume-test:
STEP: delete the pod
Jun 3 21:55:43.226: INFO: Waiting for pod pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c to disappear
Jun 3 21:55:43.228: INFO: Pod pod-secrets-b01694bc-78e1-4f72-af04-af637a5a5a5c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:43.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1230" for this suite.

• [SLOW TEST:12.092 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}
SSSSS
------------------------------
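"Mappings and Item Mode" means the secret key is remapped to a new file path and given an explicit file mode via `items`. A minimal sketch (names illustrative; note YAML reads 0400 as octal, i.e. decimal 256):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-secret-mode
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "stat -c '%a %n' /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - { name: secret-volume, mountPath: /etc/secret-volume }
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400
  EOF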
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W0603 21:55:31.203786 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.204: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.205: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-b0c7e3fb-6d43-44c2-a931-3587977c1a47
STEP: Creating a pod to test consume configMaps
Jun 3 21:55:31.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc" in namespace "configmap-8134" to be "Succeeded or Failed"
Jun 3 21:55:31.223: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256334ms
Jun 3 21:55:33.226: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00569464s
Jun 3 21:55:35.230: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009041306s
Jun 3 21:55:37.233: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012645036s
Jun 3 21:55:39.238: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017778777s
Jun 3 21:55:41.242: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021159596s
Jun 3 21:55:43.244: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.02370976s
STEP: Saw pod success
Jun 3 21:55:43.244: INFO: Pod "pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc" satisfied condition "Succeeded or Failed"
Jun 3 21:55:43.247: INFO: Trying to get logs from node node1 pod pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc container agnhost-container:
STEP: delete the pod
Jun 3 21:55:43.258: INFO: Waiting for pod pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc to disappear
Jun 3 21:55:43.260: INFO: Pod pod-configmaps-252904f9-ec46-4ab8-96b7-a2276b88e7fc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:43.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8134" for this suite.

• [SLOW TEST:12.085 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:43.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:43.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9500" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":2,"skipped":72,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:39.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39
[It] should mirror a custom Endpoints resource through create update and delete [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: mirroring a new custom Endpoint
Jun 3 21:55:39.541: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
STEP: mirroring an update to a custom Endpoint
STEP: mirroring deletion of a custom Endpoint
Jun 3 21:55:41.556: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:43.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-2748" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":2,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
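Mirroring applies to Endpoints that belong to a selector-less Service: the endpointslicemirroring controller maintains a matching EndpointSlice labelled with the owning service's name. A sketch with illustrative names and an illustrative IP:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: example-custom-endpoints
  spec:
    ports:
    - port: 80
  ---
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: example-custom-endpoints
  subsets:
  - addresses:
    - ip: 10.10.1.5
    ports:
    - port: 80
  EOF
  # the controller creates (and keeps in sync) a mirrored slice
  kubectl get endpointslices -l kubernetes.io/service-name=example-custom-endpoints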
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:40.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
Jun 3 21:55:40.463: INFO: The status of Pod pod-hostip-56db63c9-26d9-4356-b9c8-f05902545749 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:42.466: INFO: The status of Pod pod-hostip-56db63c9-26d9-4356-b9c8-f05902545749 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:44.467: INFO: The status of Pod pod-hostip-56db63c9-26d9-4356-b9c8-f05902545749 is Pending, waiting for it to be Running (with Ready = true)
Jun 3 21:55:46.467: INFO: The status of Pod pod-hostip-56db63c9-26d9-4356-b9c8-f05902545749 is Running (Ready = true)
Jun 3 21:55:46.473: INFO: Pod pod-hostip-56db63c9-26d9-4356-b9c8-f05902545749 has hostIP: 10.10.190.208
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:46.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9828" for this suite.

• [SLOW TEST:6.053 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":78,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
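The logged hostIP comes from pod status, readable once the pod is Running (pod name illustrative):

  kubectl get pod demo-pod -o jsonpath='{.status.podIP} {.status.hostIP}{"\n"}'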
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:41.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 21:55:41.376: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758" in namespace "security-context-test-8480" to be "Succeeded or Failed"
Jun 3 21:55:41.378: INFO: Pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234516ms
Jun 3 21:55:43.381: INFO: Pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005161602s
Jun 3 21:55:45.385: INFO: Pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009318379s
Jun 3 21:55:47.389: INFO: Pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013299356s
Jun 3 21:55:47.389: INFO: Pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758" satisfied condition "Succeeded or Failed"
Jun 3 21:55:47.395: INFO: Got logs for pod "busybox-privileged-false-f977fc0f-eab3-4271-af4f-e4db94d87758": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:47.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8480" for this suite.

• [SLOW TEST:6.058 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0}
S
------------------------------
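The logged "ip: RTNETLINK answers: Operation not permitted" is the expected failure of a network-admin operation inside an unprivileged container. A sketch that produces the same output (the exact command the suite runs may differ):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: unprivileged-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      # creating a network interface needs CAP_NET_ADMIN, which privileged: false denies
      command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
      securityContext:
        privileged: false
  EOF
  kubectl logs unprivileged-demo   # expect: ip: RTNETLINK answers: Operation not permitted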
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:41.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 3 21:55:41.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d187953f-03a8-4450-beca-856111da5884" in namespace "projected-3773" to be "Succeeded or Failed"
Jun 3 21:55:41.376: INFO: Pod "downwardapi-volume-d187953f-03a8-4450-beca-856111da5884": Phase="Pending", Reason="", readiness=false. Elapsed: 3.416635ms
Jun 3 21:55:43.380: INFO: Pod "downwardapi-volume-d187953f-03a8-4450-beca-856111da5884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007024267s
Jun 3 21:55:45.387: INFO: Pod "downwardapi-volume-d187953f-03a8-4450-beca-856111da5884": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014091955s
Jun 3 21:55:47.390: INFO: Pod "downwardapi-volume-d187953f-03a8-4450-beca-856111da5884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01773718s
STEP: Saw pod success
Jun 3 21:55:47.390: INFO: Pod "downwardapi-volume-d187953f-03a8-4450-beca-856111da5884" satisfied condition "Succeeded or Failed"
Jun 3 21:55:47.392: INFO: Trying to get logs from node node2 pod downwardapi-volume-d187953f-03a8-4450-beca-856111da5884 container client-container:
STEP: delete the pod
Jun 3 21:55:47.405: INFO: Waiting for pod downwardapi-volume-d187953f-03a8-4450-beca-856111da5884 to disappear
Jun 3 21:55:47.407: INFO: Pod downwardapi-volume-d187953f-03a8-4450-beca-856111da5884 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:47.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3773" for this suite.

• [SLOW TEST:6.077 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:55:31.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
W0603 21:55:31.178091 40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 21:55:31.178: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 21:55:31.180: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 3 21:55:31.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 3 21:55:33.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 3 21:55:35.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 3 21:55:37.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890131, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 3 21:55:40.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 21:55:40.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:55:48.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3667" for this suite.
STEP: Destroying namespace "webhook-3667-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.482 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
S
------------------------------
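The registration step above amounts to creating a ValidatingWebhookConfiguration that routes custom-resource operations to the deployed webhook service. A sketch of what such an object looks like (group, resource, path, and caBundle placeholder are all illustrative, not the suite's actual values):

  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: demo-deny-custom-resource
  webhooks:
  - name: deny-custom-resource.example.com
    rules:
    - apiGroups: ["stable.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE", "DELETE"]
      resources: ["crontabs"]
    clientConfig:
      service:
        namespace: default
        name: e2e-test-webhook
        path: /custom-resource
      caBundle: <base64-encoded CA certificate>   # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail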
• [SLOW TEST:8.069 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":74,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:39.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5262.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5262.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5262.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5262.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5262.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5262.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 21:55:51.868: INFO: DNS probes using dns-5262/dns-test-d8cc542e-a61f-417c-9193-c7b194a7db85 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:51.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5262" for this suite. 
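Note on the DNS test above: hostname records such as dns-querier-2.dns-test-service-2.dns-5262.svc.cluster.local only resolve because the test service is headless, so cluster DNS publishes per-pod records instead of a single virtual IP. A minimal headless-Service sketch; name, selector, and port are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			// Headless: no cluster VIP is allocated, so DNS serves per-pod
			// records like <hostname>.<service>.<ns>.svc.cluster.local.
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // illustrative
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	fmt.Println(svc.Name, svc.Spec.ClusterIP)
}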
• [SLOW TEST:12.094 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:51.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 3 21:55:51.953: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6222 67dad38d-43db-4376-87b6-8decfe635cb0 31785 0 2022-06-03 21:55:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-03 21:55:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 21:55:51.953: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6222 67dad38d-43db-4376-87b6-8decfe635cb0 31786 0 2022-06-03 21:55:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-03 21:55:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:51.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6222" for this suite. 
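Note on the watch test above: the decisive detail is that the watch is opened at the resourceVersion returned by the first update, so the API server replays the later MODIFIED and DELETED events rather than starting from "now". A client-go sketch under the assumption that a clientset has already been built from a kubeconfig; watchFrom is a hypothetical helper name.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFrom opens a ConfigMap watch starting at resourceVersion rv, so every
// event recorded at or after that version is delivered in order.
func watchFrom(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
	return nil
}

func main() {} // clientset construction (kubeconfig loading) omitted for brevity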
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:31.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container W0603 21:55:31.158919 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 21:55:31.159: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 21:55:31.161: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 3 21:55:31.163: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:52.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-671" for this suite. • [SLOW TEST:21.123 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:47.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 3 21:55:47.463: INFO: Waiting up to 5m0s for pod "security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7" in namespace "security-context-3087" to be "Succeeded or Failed" Jun 3 21:55:47.467: INFO: Pod "security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.87424ms Jun 3 21:55:49.470: INFO: Pod "security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007329943s Jun 3 21:55:51.474: INFO: Pod "security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011192463s Jun 3 21:55:53.479: INFO: Pod "security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01582634s STEP: Saw pod success Jun 3 21:55:53.479: INFO: Pod "security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7" satisfied condition "Succeeded or Failed" Jun 3 21:55:53.481: INFO: Trying to get logs from node node2 pod security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7 container test-container: STEP: delete the pod Jun 3 21:55:53.494: INFO: Waiting for pod security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7 to disappear Jun 3 21:55:53.496: INFO: Pod security-context-fefe25d2-7276-4147-9ced-e61f70c5f1d7 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:53.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3087" for this suite. • [SLOW TEST:6.073 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:46.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:55:46.596: INFO: The status of Pod busybox-host-aliases83c60b49-a8ea-40cb-818f-a13d11e9f926 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:48.600: INFO: The status of Pod busybox-host-aliases83c60b49-a8ea-40cb-818f-a13d11e9f926 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:50.601: INFO: The status of Pod busybox-host-aliases83c60b49-a8ea-40cb-818f-a13d11e9f926 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:52.601: INFO: The status of Pod busybox-host-aliases83c60b49-a8ea-40cb-818f-a13d11e9f926 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:54.600: INFO: The status of Pod busybox-host-aliases83c60b49-a8ea-40cb-818f-a13d11e9f926 is Running (Ready = true) [AfterEach] [sig-node] Kubelet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:54.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7791" for this suite. • [SLOW TEST:8.055 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":111,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:51.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-fda76ab6-0c0b-46dd-bee1-c04de8644c17 STEP: Creating a pod to test consume configMaps Jun 3 21:55:51.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068" in namespace "configmap-6573" to be "Succeeded or Failed" Jun 3 21:55:51.535: INFO: Pod "pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068": Phase="Pending", Reason="", readiness=false. Elapsed: 4.740928ms Jun 3 21:55:53.538: INFO: Pod "pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00738982s Jun 3 21:55:55.543: INFO: Pod "pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012355798s Jun 3 21:55:57.548: INFO: Pod "pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017489649s STEP: Saw pod success Jun 3 21:55:57.548: INFO: Pod "pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068" satisfied condition "Succeeded or Failed" Jun 3 21:55:57.551: INFO: Trying to get logs from node node2 pod pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068 container agnhost-container: STEP: delete the pod Jun 3 21:55:57.612: INFO: Waiting for pod pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068 to disappear Jun 3 21:55:57.614: INFO: Pod pod-configmaps-46cdaa0c-239c-4434-b84e-ab9274d7a068 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:57.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6573" for this suite. 
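Note on the ConfigMap volume test above: the consumption pattern boils down to a configMap volume source plus a mount. A sketch with k8s.io/api/core/v1 types; object names, image, and mount path are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Each key of the ConfigMap becomes a file in the volume.
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox:1.35", // illustrative
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}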
• [SLOW TEST:6.130 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":79,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:47.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-8280 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8280 to expose endpoints map[] Jun 3 21:55:47.438: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Jun 3 21:55:48.445: INFO: successfully validated that service multi-endpoint-test in namespace services-8280 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-8280 Jun 3 21:55:48.459: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:50.463: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:52.463: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:54.462: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8280 to expose endpoints map[pod1:[100]] Jun 3 21:55:54.473: INFO: successfully validated that service multi-endpoint-test in namespace services-8280 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-8280 Jun 3 21:55:54.488: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:56.493: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:58.493: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8280 to expose endpoints map[pod1:[100] pod2:[101]] Jun 3 21:55:58.508: INFO: successfully validated that service multi-endpoint-test in namespace services-8280 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-8280 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8280 to expose endpoints map[pod2:[101]] Jun 3 21:55:58.531: INFO: successfully validated that service multi-endpoint-test in namespace services-8280 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-8280 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8280 to expose endpoints map[] Jun 3 21:55:58.542: INFO:
successfully validated that service multi-endpoint-test in namespace services-8280 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:55:58.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8280" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.152 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:48.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:00.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5636" for this suite. 
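Note on the ReplicationController lifecycle test above: the step "patching ReplicationController scale" targets the scale subresource. A hedged client-go sketch of that single step, assuming an existing clientset; scaleRC and the merge-patch payload shape are an illustration, not the test's literal request.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// scaleRC patches the "scale" subresource of a ReplicationController to the
// requested replica count.
func scaleRC(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	payload := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas)) // assumed payload shape
	_, err := cs.CoreV1().ReplicationControllers(ns).Patch(
		ctx, name, types.MergePatchType, payload, metav1.PatchOptions{}, "scale")
	return err
}

func main() {} // clientset construction omitted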
• [SLOW TEST:12.274 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:51.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:55:52.033: INFO: The status of Pod server-envvars-6b24f04d-4c8f-4e30-9a96-9d2026f55c17 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:54.038: INFO: The status of Pod server-envvars-6b24f04d-4c8f-4e30-9a96-9d2026f55c17 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:56.036: INFO: The status of Pod server-envvars-6b24f04d-4c8f-4e30-9a96-9d2026f55c17 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:58.038: INFO: The status of Pod server-envvars-6b24f04d-4c8f-4e30-9a96-9d2026f55c17 is Running (Ready = true) Jun 3 21:55:58.056: INFO: Waiting up to 5m0s for pod "client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f" in namespace "pods-8786" to be "Succeeded or Failed" Jun 3 21:55:58.060: INFO: Pod "client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.815489ms Jun 3 21:56:00.064: INFO: Pod "client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007864112s Jun 3 21:56:02.070: INFO: Pod "client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013799693s Jun 3 21:56:04.075: INFO: Pod "client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018723486s STEP: Saw pod success Jun 3 21:56:04.075: INFO: Pod "client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f" satisfied condition "Succeeded or Failed" Jun 3 21:56:04.077: INFO: Trying to get logs from node node2 pod client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f container env3cont: STEP: delete the pod Jun 3 21:56:04.089: INFO: Waiting for pod client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f to disappear Jun 3 21:56:04.091: INFO: Pod client-envvars-670d7dcf-0876-458d-ab0d-0b5be779385f no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:04.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8786" for this suite. 
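Note on the Pods environment-variable test above: the client pod passes because the kubelet injects variables for every Service that existed in the namespace when the pod started. A small sketch of the naming rule; "fooservice" is a hypothetical Service name, not the one used by the test.

package main

import (
	"fmt"
	"strings"
)

// serviceEnvPrefix derives the env-var prefix from a Service name: upper-case
// it and map dashes to underscores. A Service "fooservice" on port 8765 yields
// FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT, plus docker-links style
// variables such as FOOSERVICE_PORT_8765_TCP_PROTO.
func serviceEnvPrefix(service string) string {
	return strings.ToUpper(strings.ReplaceAll(service, "-", "_"))
}

func main() {
	fmt.Println(serviceEnvPrefix("fooservice") + "_SERVICE_HOST")
}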
• [SLOW TEST:12.102 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:43.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Jun 3 21:55:44.070: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:55:44.084: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:55:46.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:55:48.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 
21:55:50.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:55:52.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890144, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:55:55.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:05.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8128" for this suite. STEP: Destroying namespace "webhook-8128-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.513 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":3,"skipped":125,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:05.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Jun 3 21:56:05.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6896 api-versions' Jun 3 21:56:05.396: INFO: stderr: "" Jun 3 21:56:05.396: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd-publish-openapi-test-foo.example.com/v1\ncrd-publish-openapi-test-multi-ver.example.com/v2\ncrd-publish-openapi-test-multi-ver.example.com/v3\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6896" for this suite. 
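Note on the kubectl api-versions test above: the same group/version listing can be reproduced programmatically with the discovery client. A sketch assuming the same kubeconfig path as this run; error handling is kept minimal.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the run above used (path assumed from the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "apps/v1"; the core group prints as bare "v1"
		}
	}
}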
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":4,"skipped":138,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:00.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-accb2e67-13ef-46af-8668-839362700664 STEP: Creating a pod to test consume configMaps Jun 3 21:56:00.973: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154" in namespace "projected-4383" to be "Succeeded or Failed" Jun 3 21:56:00.976: INFO: Pod "pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154": Phase="Pending", Reason="", readiness=false. Elapsed: 3.333622ms Jun 3 21:56:02.980: INFO: Pod "pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006870542s Jun 3 21:56:04.982: INFO: Pod "pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009490283s Jun 3 21:56:06.987: INFO: Pod "pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014017575s STEP: Saw pod success Jun 3 21:56:06.987: INFO: Pod "pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154" satisfied condition "Succeeded or Failed" Jun 3 21:56:06.989: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154 container agnhost-container: STEP: delete the pod Jun 3 21:56:07.031: INFO: Waiting for pod pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154 to disappear Jun 3 21:56:07.033: INFO: Pod pod-projected-configmaps-c4cbbcec-d454-4b0d-a1a5-588da026a154 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4383" for this suite. 
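Note on the projected-ConfigMap test above: "with mappings" means individual keys are remapped to file paths through items entries in a projected volume. A sketch; the volume, ConfigMap, key, and path names are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"}, // illustrative
						// The mapping: key "data-1" appears in the container as
						// the file path/to/data-2 under the volume's mount point.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}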
• [SLOW TEST:6.103 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:07.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 3 21:56:07.105: INFO: Waiting up to 5m0s for pod "pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023" in namespace "emptydir-4893" to be "Succeeded or Failed" Jun 3 21:56:07.110: INFO: Pod "pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661559ms Jun 3 21:56:09.113: INFO: Pod "pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008018961s Jun 3 21:56:11.116: INFO: Pod "pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010680783s STEP: Saw pod success Jun 3 21:56:11.116: INFO: Pod "pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023" satisfied condition "Succeeded or Failed" Jun 3 21:56:11.118: INFO: Trying to get logs from node node2 pod pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023 container test-container: STEP: delete the pod Jun 3 21:56:11.131: INFO: Waiting for pod pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023 to disappear Jun 3 21:56:11.134: INFO: Pod pod-d3ab21df-d0f9-4ba9-a165-b24cdfdd0023 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:11.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4893" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:43.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Jun 3 21:55:43.287: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:45.291: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:47.293: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:49.292: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:51.291: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 3 21:55:51.307: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:53.312: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:55.312: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:57.311: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 3 21:55:57.323: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:55:57.325: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:55:59.326: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:55:59.330: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:56:01.326: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:56:01.329: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:56:03.327: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:56:03.330: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:56:05.326: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:56:05.330: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:56:07.326: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:56:07.330: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:56:09.326: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:56:09.329: INFO: Pod pod-with-poststart-exec-hook still exists Jun 3 21:56:11.325: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 3 21:56:11.328: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:11.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-766" for this suite. 
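Note on the lifecycle-hook test above: the pod under test carries a postStart exec hook; the kubelet runs the hook right after the container starts and does not report the container Running until the hook completes, which is why readiness lags in the log. A sketch against the v1.21-era API (corev1.Handler; later releases renamed the type to LifecycleHandler); name, image, and hook command are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "pod-with-poststart-exec-hook",
		Image:   "busybox:1.35", // illustrative
		Command: []string{"sleep", "600"},
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// Runs inside the container right after it starts; the real
					// test instead calls back to its handler pod over HTTP.
					Command: []string{"sh", "-c", "echo poststart-hook-ran"},
				},
			},
		},
	}
	fmt.Println(c.Lifecycle.PostStart.Exec.Command)
}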
• [SLOW TEST:28.087 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:05.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-7b0ce3a1-bd2d-4a9b-863e-3117751ade04 STEP: Creating a pod to test consume secrets Jun 3 21:56:05.474: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b" in namespace "projected-4478" to be "Succeeded or Failed" Jun 3 21:56:05.479: INFO: Pod "pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.629688ms Jun 3 21:56:07.482: INFO: Pod "pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00797409s Jun 3 21:56:09.485: INFO: Pod "pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010415422s Jun 3 21:56:11.489: INFO: Pod "pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015197379s STEP: Saw pod success Jun 3 21:56:11.489: INFO: Pod "pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b" satisfied condition "Succeeded or Failed" Jun 3 21:56:11.491: INFO: Trying to get logs from node node2 pod pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b container projected-secret-volume-test: STEP: delete the pod Jun 3 21:56:11.516: INFO: Waiting for pod pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b to disappear Jun 3 21:56:11.518: INFO: Pod pod-projected-secrets-b1d5d53c-e3f8-4352-a00f-192dc65e085b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:11.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4478" for this suite. 
• [SLOW TEST:6.091 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":151,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:11.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-19a42c39-f610-4590-abe2-5117db175484 STEP: Creating a pod to test consume configMaps Jun 3 21:56:11.225: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54" in namespace "projected-9353" to be "Succeeded or Failed" Jun 3 21:56:11.228: INFO: Pod "pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555695ms Jun 3 21:56:13.232: INFO: Pod "pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006739351s Jun 3 21:56:15.236: INFO: Pod "pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010827425s STEP: Saw pod success Jun 3 21:56:15.236: INFO: Pod "pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54" satisfied condition "Succeeded or Failed" Jun 3 21:56:15.239: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54 container agnhost-container: STEP: delete the pod Jun 3 21:56:15.254: INFO: Waiting for pod pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54 to disappear Jun 3 21:56:15.256: INFO: Pod pod-projected-configmaps-3ddbb181-140e-444d-afa0-7aa477fd1b54 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:15.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9353" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:11.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-1260/secret-test-b76b20bc-972b-43f0-b2c3-14cd67546d49 STEP: Creating a pod to test consume secrets Jun 3 21:56:11.563: INFO: Waiting up to 5m0s for pod "pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708" in namespace "secrets-1260" to be "Succeeded or Failed" Jun 3 21:56:11.565: INFO: Pod "pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413509ms Jun 3 21:56:13.568: INFO: Pod "pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005661929s Jun 3 21:56:15.572: INFO: Pod "pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00901055s STEP: Saw pod success Jun 3 21:56:15.572: INFO: Pod "pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708" satisfied condition "Succeeded or Failed" Jun 3 21:56:15.574: INFO: Trying to get logs from node node1 pod pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708 container env-test: STEP: delete the pod Jun 3 21:56:15.852: INFO: Waiting for pod pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708 to disappear Jun 3 21:56:15.853: INFO: Pod pod-configmaps-2191e646-4da0-40bd-ac24-e4a01c800708 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:15.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1260" for this suite. 
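Note on the Secrets test above: consuming a Secret "via the environment" means a secretKeyRef that the kubelet resolves before starting the container. A sketch with illustrative names.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "env-test",
		Image:   "busybox:1.35", // illustrative
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					// Secret and key names are illustrative; the value of key
					// "data-1" is injected as SECRET_DATA at container start.
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	fmt.Println(c.Env[0].Name)
}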
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:15.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Jun 3 21:56:15.939: INFO: Found Service test-service-fkvrx in namespace services-4647 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Jun 3 21:56:15.939: INFO: Service test-service-fkvrx created STEP: Getting /status Jun 3 21:56:15.942: INFO: Service test-service-fkvrx has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Jun 3 21:56:15.946: INFO: observed Service test-service-fkvrx in namespace services-4647 with annotations: map[] & LoadBalancer: {[]} Jun 3 21:56:15.946: INFO: Found Service test-service-fkvrx in namespace services-4647 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Jun 3 21:56:15.946: INFO: Service test-service-fkvrx has service status patched STEP: updating the ServiceStatus Jun 3 21:56:15.954: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Jun 3 21:56:15.955: INFO: Observed Service test-service-fkvrx in namespace services-4647 with annotations: map[] & Conditions: {[]} Jun 3 21:56:15.955: INFO: Observed event: &Service{ObjectMeta:{test-service-fkvrx services-4647 83ff6998-af58-4a6c-b2ba-093acae63f85 32558 0 2022-06-03 21:56:15 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-06-03 21:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.3.127,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.3.127],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Jun 3 21:56:15.955: INFO: Found Service test-service-fkvrx in namespace services-4647 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Jun 3 21:56:15.955: INFO: Service test-service-fkvrx has service status updated STEP: patching the service STEP: watching for the Service to be patched Jun 3 21:56:15.969: INFO: observed Service test-service-fkvrx in namespace services-4647 with labels: map[test-service-static:true] Jun 3 21:56:15.969: INFO: observed Service test-service-fkvrx in namespace services-4647 with labels: map[test-service-static:true] Jun 3 21:56:15.969: INFO: observed Service test-service-fkvrx in namespace services-4647 with labels: map[test-service-static:true] Jun 3 21:56:15.969: INFO: Found Service test-service-fkvrx in namespace services-4647 with labels: map[test-service:patched test-service-static:true] Jun 3 21:56:15.969: INFO: Service test-service-fkvrx patched STEP: deleting the service STEP: watching for the Service to be deleted Jun 3 21:56:15.985: INFO: Observed event: ADDED Jun 3 21:56:15.985: INFO: Observed event: MODIFIED Jun 3 21:56:15.985: INFO: Observed event: MODIFIED Jun 3 21:56:15.985: INFO: Observed event: MODIFIED Jun 3 21:56:15.985: INFO: Found Service test-service-fkvrx in namespace services-4647 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Jun 3 21:56:15.985: INFO: Service test-service-fkvrx deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:15.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4647" for this suite. 
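------------------------------
The Services spec above patches the status subresource directly, which is why a plain ClusterIP service briefly reports a LoadBalancer ingress of 203.0.113.1. A hedged client-go sketch of that status patch, reusing the service name and namespace from the log (the kubeconfig path is an assumption):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Merge-patch the "status" subresource: set a fake LoadBalancer ingress
	// IP plus a marker annotation, mirroring what the spec observes.
	patch := []byte(`{
		"metadata": {"annotations": {"patchedstatus": "true"}},
		"status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.1"}]}}
	}`)
	_, err = cs.CoreV1().Services("services-4647").Patch(
		context.TODO(), "test-service-fkvrx",
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
}
------------------------------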
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:11.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:56:11.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf" in namespace "downward-api-2699" to be "Succeeded or Failed" Jun 3 21:56:11.400: INFO: Pod "downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.60494ms Jun 3 21:56:13.403: INFO: Pod "downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007334275s Jun 3 21:56:15.406: INFO: Pod "downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010574645s Jun 3 21:56:17.412: INFO: Pod "downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016535021s STEP: Saw pod success Jun 3 21:56:17.412: INFO: Pod "downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf" satisfied condition "Succeeded or Failed" Jun 3 21:56:17.416: INFO: Trying to get logs from node node2 pod downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf container client-container: STEP: delete the pod Jun 3 21:56:17.429: INFO: Waiting for pod downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf to disappear Jun 3 21:56:17.430: INFO: Pod downwardapi-volume-1334718d-310d-41b5-89bc-7d6f0a35bedf no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:17.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2699" for this suite. 
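------------------------------
The Downward API spec above projects the container's CPU limit into a volume file and reads it back. A minimal sketch of that pod, under assumed names and image; printing the object keeps the sketch runnable without a cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// limits.cpu is written to the file rounded up by the
							// divisor (default "1"), so 1250m reads back as "2".
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------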
• [SLOW TEST:6.077 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:54.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Jun 3 21:55:54.689: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:56.693: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:55:58.695: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:00.696: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:02.693: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled Jun 3 21:56:02.705: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:04.708: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:06.708: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides Jun 3 21:56:06.719: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:08.723: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:10.723: INFO: The status of Pod pod3 is Running (Ready = true) Jun 3 21:56:10.736: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:12.744: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Jun 3 21:56:12.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-1025 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:56:12.747: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to 
serverIP: 10.10.190.207, port: 54323 Jun 3 21:56:13.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-1025 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:56:13.057: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP Jun 3 21:56:13.146: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-1025 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:56:13.146: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:18.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-1025" for this suite. • [SLOW TEST:23.599 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":122,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:04.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 3 21:56:04.772: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 3 21:56:06.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:56:08.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890164, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:56:11.789: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:56:11.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:19.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3090" for this suite. 
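------------------------------
The conversion-webhook spec above deploys a server that the API server calls to translate custom resources between v1 and v2. As a rough sketch of the server side, the handler below mirrors the apiextensions.k8s.io/v1 ConversionReview wire format with hand-rolled structs; the actual conversion is stubbed, and the route, port, and cert paths are placeholders, not the test's real webhook.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type conversionReview struct {
	APIVersion string              `json:"apiVersion"`
	Kind       string              `json:"kind"`
	Request    *conversionRequest  `json:"request,omitempty"`
	Response   *conversionResponse `json:"response,omitempty"`
}

type conversionRequest struct {
	UID               string                   `json:"uid"`
	DesiredAPIVersion string                   `json:"desiredAPIVersion"`
	Objects           []map[string]interface{} `json:"objects"`
}

type conversionResponse struct {
	UID              string                   `json:"uid"`
	ConvertedObjects []map[string]interface{} `json:"convertedObjects"`
	Result           map[string]string        `json:"result"`
}

func convert(w http.ResponseWriter, r *http.Request) {
	var review conversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	out := make([]map[string]interface{}, 0, len(review.Request.Objects))
	for _, obj := range review.Request.Objects {
		// A real webhook rewrites the object's schema here; this stub only
		// stamps the apiVersion the API server asked for.
		obj["apiVersion"] = review.Request.DesiredAPIVersion
		out = append(out, obj)
	}
	review.Response = &conversionResponse{
		UID:              review.Request.UID, // must echo the request UID
		ConvertedObjects: out,
		Result:           map[string]string{"status": "Success"},
	}
	review.Request = nil
	_ = json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/crdconvert", convert)
	// Conversion webhooks must serve TLS; cert paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil))
}
------------------------------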
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.766 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":5,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:15.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 3 21:56:19.355: INFO: &Pod{ObjectMeta:{send-events-ec9b4a36-a00f-4e0c-839e-42e5b4b4bae3 events-7642 a964f3a6-10ab-42e5-9b98-66d21367aa89 32656 0 2022-06-03 21:56:15 +0000 UTC map[name:foo time:332993357] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.227" ], "mac": "2a:61:bb:08:0d:6e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.227" ], "mac": "2a:61:bb:08:0d:6e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-06-03 21:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 21:56:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 21:56:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xrrb7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrrb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.227,StartTime:2022-06-03 21:56:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 21:56:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://f903401306eb8b1d9839cea7ece0974a516256b281bd80da1191ac2d7c407489,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jun 3 21:56:21.359: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 3 21:56:23.362: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:23.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7642" for this suite. 
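------------------------------
The Events spec above waits for one scheduler event and one kubelet event about its pod. A hedged client-go sketch of the equivalent query, using a field selector on the involved object; the pod name and namespace come from the log, the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	events, err := cs.CoreV1().Events("events-7642").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=send-events-ec9b4a36-a00f-4e0c-839e-42e5b4b4bae3",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Source.Component is "default-scheduler" for the scheduling event
		// and "kubelet" for image pull / create / start events.
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}
------------------------------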
• [SLOW TEST:8.064 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":6,"skipped":69,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:17.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:56:17.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70" in namespace "projected-5526" to be "Succeeded or Failed" Jun 3 21:56:17.496: INFO: Pod "downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200685ms Jun 3 21:56:19.499: INFO: Pod "downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00652564s Jun 3 21:56:21.503: INFO: Pod "downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010098901s Jun 3 21:56:23.509: INFO: Pod "downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016185324s STEP: Saw pod success Jun 3 21:56:23.509: INFO: Pod "downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70" satisfied condition "Succeeded or Failed" Jun 3 21:56:23.511: INFO: Trying to get logs from node node2 pod downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70 container client-container: STEP: delete the pod Jun 3 21:56:23.523: INFO: Waiting for pod downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70 to disappear Jun 3 21:56:23.525: INFO: Pod downwardapi-volume-23a7f396-1eb4-483b-968e-fa627f70cd70 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:23.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5526" for this suite. 
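------------------------------
The projected downwardAPI spec above deliberately sets no memory limit, so the kubelet substitutes the node's allocatable memory when resolving limits.memory. A sketch of that volume shape, with assumed names; note it uses a projected volume rather than a plain downwardAPI one.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-memory-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// No memory limit is set, so limits.memory below resolves to
				// the node's allocatable memory.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------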
• [SLOW TEST:6.073 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:18.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 3 21:56:18.315: INFO: The status of Pod annotationupdatef0fb058f-74ae-4263-ab6a-734cbb6b6423 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:20.318: INFO: The status of Pod annotationupdatef0fb058f-74ae-4263-ab6a-734cbb6b6423 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:22.319: INFO: The status of Pod annotationupdatef0fb058f-74ae-4263-ab6a-734cbb6b6423 is Running (Ready = true) Jun 3 21:56:22.840: INFO: Successfully updated pod "annotationupdatef0fb058f-74ae-4263-ab6a-734cbb6b6423" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:26.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7723" for this suite. 
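------------------------------
The annotation-update spec above edits the running pod's metadata and waits for the kubelet to rewrite the downward API volume file that projects metadata.annotations. A hedged sketch of the update step; the pod name and namespace come from the log, while the annotation key and value are illustrative assumptions.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Patch an annotation on the live pod; the kubelet then refreshes the
	// projected file on its next sync.
	patch := []byte(`{"metadata":{"annotations":{"builder":"foo"}}}`)
	_, err = cs.CoreV1().Pods("downward-api-7723").Patch(context.TODO(),
		"annotationupdatef0fb058f-74ae-4263-ab6a-734cbb6b6423",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------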
• [SLOW TEST:8.607 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:57.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jun 3 21:55:57.663: INFO: >>> kubeConfig: /root/.kube/config Jun 3 21:56:05.809: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:28.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2265" for this suite. 
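------------------------------
The CRD publish-OpenAPI spec above checks that kinds from CRDs in different groups all appear in the aggregated OpenAPI document. A hedged sketch of that check: fetch /openapi/v2 through the discovery client and search for the kind. The kind string is a stand-in, not the test's real CRD name.

package main

import (
	"bytes"
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the aggregated swagger document the spec inspects.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	// Substitute the CRD kind you expect the API server to have published.
	fmt.Println("published:", bytes.Contains(raw, []byte("e2e-test-crd-publish-openapi")))
}
------------------------------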
• [SLOW TEST:30.996 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":5,"skipped":84,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:23.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics Jun 3 21:56:29.603: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 3 21:56:29.739: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:29.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2990" for this suite.
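------------------------------
The garbage collector spec above deletes a replication controller with delete options that keep it around until its pods are gone, i.e. foreground propagation: the API server parks the RC with a foregroundDeletion finalizer until the GC has removed all dependents. A hedged sketch of that delete call; the RC name is a guess, since the log never prints it.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC object survives, carrying a
	// foregroundDeletion finalizer, until every owned pod is deleted.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("gc-2990").Delete(
		context.TODO(), "simpletest.rc", // assumed name for illustration
		metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
}
------------------------------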
• [SLOW TEST:6.208 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:28.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 21:56:35.709: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:35.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7688" for this suite. 
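------------------------------
The container-runtime spec above verifies that with TerminationMessagePolicy FallbackToLogsOnError, a pod that succeeds reports an empty termination message (logs are only substituted on error). A minimal sketch of such a container, with assumed names and image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-fallback"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "busybox:1.34",
				Command: []string{"true"}, // exits 0, writes nothing to /dev/termination-log
				// Logs are used as the message only when the container fails,
				// so a successful run leaves the message empty.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------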
• [SLOW TEST:7.076 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":88,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:29.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:56:29.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6" in namespace "projected-7357" to be "Succeeded or Failed" Jun 3 21:56:29.838: INFO: Pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550106ms Jun 3 21:56:31.841: INFO: Pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005473251s Jun 3 21:56:33.845: INFO: Pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009890975s Jun 3 21:56:35.850: INFO: Pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013981115s Jun 3 21:56:37.854: INFO: Pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.018301186s STEP: Saw pod success Jun 3 21:56:37.854: INFO: Pod "downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6" satisfied condition "Succeeded or Failed" Jun 3 21:56:37.856: INFO: Trying to get logs from node node1 pod downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6 container client-container: STEP: delete the pod Jun 3 21:56:37.868: INFO: Waiting for pod downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6 to disappear Jun 3 21:56:37.870: INFO: Pod downwardapi-volume-bdf0da63-43e8-4376-a3ff-72ec6e4667c6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:37.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7357" for this suite. • [SLOW TEST:8.074 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":75,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:26.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:56:27.465: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:56:29.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:56:31.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:56:33.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:56:35.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890187, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:56:38.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:38.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9108" for this suite. STEP: Destroying namespace "webhook-9108-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.629 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":7,"skipped":164,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:35.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 3 21:56:35.788: INFO: Waiting up to 5m0s for pod "pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6" in namespace "emptydir-3184" to be "Succeeded or Failed" Jun 3 21:56:35.795: INFO: Pod "pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621225ms Jun 3 21:56:37.798: INFO: Pod "pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009440244s Jun 3 21:56:39.802: INFO: Pod "pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013836825s STEP: Saw pod success Jun 3 21:56:39.802: INFO: Pod "pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6" satisfied condition "Succeeded or Failed" Jun 3 21:56:39.805: INFO: Trying to get logs from node node2 pod pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6 container test-container: STEP: delete the pod Jun 3 21:56:39.875: INFO: Waiting for pod pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6 to disappear Jun 3 21:56:39.877: INFO: Pod pod-8cae9dab-3d95-48f8-aa85-e9f6767b01e6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:39.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3184" for this suite. 
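------------------------------
The EmptyDir spec above mounts a tmpfs-backed emptyDir and checks its mount type and mode. A minimal sketch of that volume shape, with assumed names; setting the medium to Memory is what makes the kubelet back the directory with tmpfs.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------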
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":100,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:20.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-3994 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3994 STEP: Deleting pre-stop pod Jun 3 21:56:41.083: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:41.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3994" for this suite. 
• [SLOW TEST:21.082 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":6,"skipped":106,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:37.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:56:38.029: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-493f30f4-ad90-4883-babb-3073f453fe93" in namespace "security-context-test-7889" to be "Succeeded or Failed" Jun 3 21:56:38.033: INFO: Pod "alpine-nnp-false-493f30f4-ad90-4883-babb-3073f453fe93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789359ms Jun 3 21:56:40.036: INFO: Pod "alpine-nnp-false-493f30f4-ad90-4883-babb-3073f453fe93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007319219s Jun 3 21:56:42.041: INFO: Pod "alpine-nnp-false-493f30f4-ad90-4883-babb-3073f453fe93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011460857s Jun 3 21:56:44.044: INFO: Pod "alpine-nnp-false-493f30f4-ad90-4883-babb-3073f453fe93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015239474s Jun 3 21:56:44.044: INFO: Pod "alpine-nnp-false-493f30f4-ad90-4883-babb-3073f453fe93" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:44.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7889" for this suite. 
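------------------------------
The Security Context spec above runs a non-root container with allowPrivilegeEscalation: false, which sets no_new_privs so setuid binaries cannot raise the effective UID. A minimal sketch of that security context, with assumed names, image, and UID.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1000)
	allowEscalation := false
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "nnp-test",
				Image:   "alpine:3.15",
				Command: []string{"id", "-u"}, // prints 1000: no escalation happened
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &runAsUser,
					// Sets no_new_privs on the container process.
					AllowPrivilegeEscalation: &allowEscalation,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------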
• [SLOW TEST:6.066 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":133,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:38.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jun 3 21:56:38.630: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jun 3 21:56:38.635: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 3 21:56:38.635: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jun 3 21:56:38.649: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 3 21:56:38.649: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jun 3 21:56:38.661: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} 
memory:{{157286400 0} {} 150Mi BinarySI}] Jun 3 21:56:38.661: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jun 3 21:56:45.707: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:45.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3800" for this suite. • [SLOW TEST:7.121 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":8,"skipped":174,"failed":0} SS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:45.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:45.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7183" for this suite. 
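------------------------------
The LimitRange spec above creates defaults that the admission controller stamps onto pods with missing requests or limits; the values below are taken from the verification lines in the log (request cpu 100m, memory 209715200, ephemeral-storage 214748364800; limit cpu 500m, memory 500Mi, ephemeral-storage 500Gi). The object name is an assumption. A sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := &corev1.LimitRange{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "LimitRange"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-limitrange"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied to containers that omit resource requests.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("209715200"),
					corev1.ResourceEphemeralStorage: resource.MustParse("214748364800"),
				},
				// Applied to containers that omit resource limits.
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(lr, "", "  ")
	fmt.Println(string(out))
}
------------------------------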
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":9,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:41.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:45.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6447" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":7,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:58.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jun 3 21:55:58.684: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jun 3 21:56:18.065: INFO: >>> kubeConfig: /root/.kube/config Jun 3 21:56:26.734: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:48.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6206" for this suite. 
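------------------------------
For reference: the CRD-publishing spec above asserts that custom resources from one multi-version CRD, and from two CRDs sharing a group, show up in the aggregated OpenAPI document. A minimal two-version CRD of the shape the test creates might look like the sketch below; the group, kind, and permissive schema are illustrative, not the test's actual fixture.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com              # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                     # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

Every served version is then discoverable through kubectl explain and the /openapi/v2 endpoint, which is what the two STEP checks above verify.
------------------------------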
• [SLOW TEST:49.898 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":5,"skipped":83,"failed":0} SS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:44.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Jun 3 21:56:48.642: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1444 pod-service-account-96a00ec3-7e0e-44f0-b4a8-3ad50ac98d23 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 3 21:56:48.884: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1444 pod-service-account-96a00ec3-7e0e-44f0-b4a8-3ad50ac98d23 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 3 21:56:49.119: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1444 pod-service-account-96a00ec3-7e0e-44f0-b4a8-3ad50ac98d23 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:49.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1444" for this suite. 
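------------------------------
For reference: the three kubectl exec calls above read the standard projected service-account files that the kubelet mounts into every container while automountServiceAccountToken is left at its default of true. A minimal reproduction, assuming a busybox image rather than the agnhost image this suite uses:

apiVersion: v1
kind: Pod
metadata:
  name: token-reader            # illustrative
spec:
  containers:
  - name: test
    image: busybox:1.29         # illustrative image
    command: ["sleep", "3600"]

kubectl exec token-reader -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec token-reader -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec token-reader -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
------------------------------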
• [SLOW TEST:5.283 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":8,"skipped":149,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:45.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-af65901f-47cb-4652-9a93-8614acbd83fd STEP: Creating a pod to test consume configMaps Jun 3 21:56:45.879: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6" in namespace "configmap-7513" to be "Succeeded or Failed" Jun 3 21:56:45.882: INFO: Pod "pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.905051ms Jun 3 21:56:47.887: INFO: Pod "pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505481s Jun 3 21:56:49.890: INFO: Pod "pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010240806s STEP: Saw pod success Jun 3 21:56:49.890: INFO: Pod "pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6" satisfied condition "Succeeded or Failed" Jun 3 21:56:49.892: INFO: Trying to get logs from node node2 pod pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6 container agnhost-container: STEP: delete the pod Jun 3 21:56:49.904: INFO: Waiting for pod pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6 to disappear Jun 3 21:56:49.906: INFO: Pod pod-configmaps-7fa4b959-4c1d-4f55-93b9-88cea72ecfb6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:49.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7513" for this suite. 
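------------------------------
For reference: "mappings and Item mode set" means the configMap volume uses items to remap a key onto a custom path and sets a per-file mode. A sketch under assumed names and a busybox image (the suite actually uses agnhost and generated names with UID suffixes):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map     # illustrative
data:
  data-1: value-1                     # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps                # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                          # the "mappings"
      - key: data-1
        path: path/to/data-1
        mode: 0400                    # the per-item file mode (octal)
  containers:
  - name: agnhost-container           # container name from the log
    image: busybox:1.29               # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
------------------------------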
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":203,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:46.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 3 21:56:46.153: INFO: Waiting up to 5m0s for pod "downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25" in namespace "downward-api-2806" to be "Succeeded or Failed" Jun 3 21:56:46.156: INFO: Pod "downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.594477ms Jun 3 21:56:48.159: INFO: Pod "downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006283718s Jun 3 21:56:50.164: INFO: Pod "downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010824119s STEP: Saw pod success Jun 3 21:56:50.164: INFO: Pod "downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25" satisfied condition "Succeeded or Failed" Jun 3 21:56:50.167: INFO: Trying to get logs from node node2 pod downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25 container dapi-container: STEP: delete the pod Jun 3 21:56:50.178: INFO: Waiting for pod downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25 to disappear Jun 3 21:56:50.182: INFO: Pod downward-api-4048cd44-7fee-4a90-bf2b-63acdc213a25 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:50.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2806" for this suite. 
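------------------------------
For reference: the downward API fields this spec checks are exposed as environment variables through fieldRef. A sketch, assuming busybox instead of the suite's image:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container        # container name from the log
    image: busybox:1.29         # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
------------------------------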
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":158,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:53.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:53.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7154" for this suite. • [SLOW TEST:60.044 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:48.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:56:49.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:56:51.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:56:53.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890209, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:56:56.039: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:56:57.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4502" for this suite. STEP: Destroying namespace "webhook-4502-markers" for this suite. 
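------------------------------
For reference: once the webhook pod and the e2e-test-webhook service above are ready, the test registers validating webhooks against configmap CREATEs and later deletes them as a collection. The registration object is roughly the sketch below; the service name and namespace come from the log, while the webhook names, handler path, and rules are assumptions, and caBundle must carry the CA produced in the "Setting up server cert" step.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-configmap-webhook          # illustrative
webhooks:
- name: deny-configmap.example.com      # illustrative
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-4502
      name: e2e-test-webhook
      path: /configmaps                 # illustrative handler path
    caBundle: <base64-encoded CA cert>  # placeholder

The two "does not comply" STEPs then expect configmap creation to be rejected while the configuration exists and to succeed once the collection has been deleted.
------------------------------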
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.730 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:49.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:56:49.420: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 3 21:56:49.426: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 3 21:56:54.429: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 21:56:56.434: INFO: Creating deployment "test-rolling-update-deployment" Jun 3 21:56:56.438: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 3 21:56:56.443: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 3 21:56:58.452: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 3 21:56:58.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890216, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890216, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890216, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890216, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:00.458: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 21:57:00.465: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7581 89895463-790a-4935-91b3-2a84f1c9cbd9 34083 1 2022-06-03 21:56:56 +0000 UTC map[name:sample-pod] 
map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-06-03 21:56:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 21:56:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d2ffc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-03 21:56:56 +0000 UTC,LastTransitionTime:2022-06-03 21:56:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-06-03 21:56:59 +0000 UTC,LastTransitionTime:2022-06-03 21:56:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 3 21:57:00.469: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-7581 479c7f95-7031-4f34-b30a-44ed0a3fe9fc 34073 1 2022-06-03 21:56:56 +0000 UTC map[name:sample-pod pod-template-hash:585b757574]
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 89895463-790a-4935-91b3-2a84f1c9cbd9 0xc004f14477 0xc004f14478}] [] [{kube-controller-manager Update apps/v1 2022-06-03 21:56:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89895463-790a-4935-91b3-2a84f1c9cbd9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f14508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 3 21:57:00.469: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 3 21:57:00.469: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7581 014bd2b5-9b27-44da-9c02-132ad4f1c801 34082 2 2022-06-03 21:56:49 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 89895463-790a-4935-91b3-2a84f1c9cbd9 0xc004f14367 0xc004f14368}] [] [{e2e.test Update apps/v1 2022-06-03 21:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 21:56:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89895463-790a-4935-91b3-2a84f1c9cbd9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004f14408 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 21:57:00.472: INFO: Pod "test-rolling-update-deployment-585b757574-6dvxm" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-6dvxm test-rolling-update-deployment-585b757574- deployment-7581 dfb6c3fb-aa94-49da-a6d2-93151c5b9e14 34072 0 2022-06-03 21:56:56 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.251" ], "mac": "ae:d8:bf:9b:75:5e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.251" ], "mac": "ae:d8:bf:9b:75:5e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 479c7f95-7031-4f34-b30a-44ed0a3fe9fc 0xc004f1491f 0xc004f14930}] [] [{kube-controller-manager Update v1 2022-06-03 21:56:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"479c7f95-7031-4f34-b30a-44ed0a3fe9fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 21:56:58 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 21:56:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-shfdt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shfdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[
]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:56:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.251,StartTime:2022-06-03 21:56:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 21:56:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://02174c739a951d063e100263acc55c7cf3652db6f2718446e050f207f9265f9b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:00.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7581" for this suite. 
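------------------------------
For reference: the object dumps above flatten the test deployment into Go struct notation; reconstructed from the printed ObjectMeta and Spec, the same deployment in declarative form is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: sample-pod          # also matches the adopted test-rolling-update-controller pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32

The rolling update replaces the adopted replica set's httpd pod with an agnhost pod and scales the old replica set to 0 while keeping it in history, which is what the "one old replica set" assertion checks.
------------------------------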
• [SLOW TEST:11.082 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":9,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:40.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0603 21:55:40.743296 30 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:00.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7309" for this suite. • [SLOW TEST:80.051 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":3,"skipped":101,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:49.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:00.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6002" for this suite. • [SLOW TEST:11.068 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":11,"skipped":208,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":85,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:57.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 3 21:56:57.348: INFO: The status of Pod labelsupdate636d7484-a48f-4dde-a388-fdbefce10bb8 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:56:59.351: INFO: The status of Pod labelsupdate636d7484-a48f-4dde-a388-fdbefce10bb8 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:01.352: INFO: The status of Pod labelsupdate636d7484-a48f-4dde-a388-fdbefce10bb8 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:03.357: INFO: The status of Pod labelsupdate636d7484-a48f-4dde-a388-fdbefce10bb8 is Running (Ready = true) Jun 3 21:57:03.874: INFO: Successfully updated pod "labelsupdate636d7484-a48f-4dde-a388-fdbefce10bb8" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:05.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6349" for this suite. 
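------------------------------
For reference: the labels-update spec above mounts pod labels through a projected downwardAPI volume, patches the labels, and waits for the kubelet to rewrite the mounted file (the "Successfully updated pod" line). A sketch under illustrative names and a busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo       # illustrative; the test's name carries a UID suffix
  labels:
    key: value1                 # illustrative starting label
spec:
  containers:
  - name: client-container      # illustrative
    image: busybox:1.29         # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

kubectl label pod labelsupdate-demo key=value2 --overwrite then shows up in /etc/podinfo/labels without a container restart.
------------------------------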
• [SLOW TEST:8.599 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":85,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:00.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:57:01.341: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:57:03.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890221, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890221, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890221, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890221, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:57:06.359: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:06.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7533" for this suite. STEP: Destroying namespace "webhook-7533-markers" for this suite. 
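------------------------------
For reference: the mutating variant registers against the same e2e-test-webhook service (namespace webhook-7533 here) and answers AdmissionReview requests with a patch, so the configmap created in the final STEP comes back already modified. A sketch; the webhook names, handler path, and rules are assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap-webhook        # illustrative
webhooks:
- name: mutate-configmap.example.com    # illustrative
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-7533
      name: e2e-test-webhook
      path: /mutating-configmaps        # illustrative handler path
    caBundle: <base64-encoded CA cert>  # placeholder
------------------------------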
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.886 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:01.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-4f76f5c1-f568-4b5c-a6f8-b9d8ef245050 STEP: Creating a pod to test consume secrets Jun 3 21:57:01.056: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202" in namespace "projected-7974" to be "Succeeded or Failed" Jun 3 21:57:01.060: INFO: Pod "pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202": Phase="Pending", Reason="", readiness=false. Elapsed: 3.110213ms Jun 3 21:57:03.064: INFO: Pod "pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007076153s Jun 3 21:57:05.067: INFO: Pod "pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010449608s Jun 3 21:57:07.071: INFO: Pod "pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014942813s STEP: Saw pod success Jun 3 21:57:07.072: INFO: Pod "pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202" satisfied condition "Succeeded or Failed" Jun 3 21:57:07.074: INFO: Trying to get logs from node node2 pod pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202 container projected-secret-volume-test: STEP: delete the pod Jun 3 21:57:07.089: INFO: Waiting for pod pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202 to disappear Jun 3 21:57:07.091: INFO: Pod pod-projected-secrets-1c1ee8cb-686e-4997-951b-c17616913202 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:07.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7974" for this suite. 
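------------------------------
For reference: the projected-secret spec wraps the secret as one source of a projected volume rather than using the plain secret volume type. A sketch under illustrative names (the generated names above carry UID suffixes) and a busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets           # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test   # illustrative
  containers:
  - name: projected-secret-volume-test  # container name from the log
    image: busybox:1.29                 # illustrative image
    command: ["sh", "-c", "cat /etc/projected-secret/*"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
------------------------------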
• [SLOW TEST:6.075 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:05.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:57:05.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb" in namespace "projected-9977" to be "Succeeded or Failed" Jun 3 21:57:05.957: INFO: Pod "downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909444ms Jun 3 21:57:07.961: INFO: Pod "downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007807086s Jun 3 21:57:09.965: INFO: Pod "downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012417546s STEP: Saw pod success Jun 3 21:57:09.965: INFO: Pod "downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb" satisfied condition "Succeeded or Failed" Jun 3 21:57:09.968: INFO: Trying to get logs from node node1 pod downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb container client-container: STEP: delete the pod Jun 3 21:57:09.981: INFO: Waiting for pod downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb to disappear Jun 3 21:57:09.983: INFO: Pod downwardapi-volume-4aaddcfe-702b-4fe9-af08-847994a22edb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:09.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9977" for this suite. 
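------------------------------
For reference: "container's memory request" is surfaced through a downwardAPI item with a resourceFieldRef rather than a fieldRef. A sketch with an assumed 32Mi request and a busybox image; with divisor 1Mi the mounted file contains "32":

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name from the log
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi              # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
------------------------------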
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":224,"failed":0} [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:07.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-99417e81-79f5-4b9d-9f30-3af27aada0e0 STEP: Creating a pod to test consume secrets Jun 3 21:57:07.140: INFO: Waiting up to 5m0s for pod "pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2" in namespace "secrets-8385" to be "Succeeded or Failed" Jun 3 21:57:07.144: INFO: Pod "pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962247ms Jun 3 21:57:09.148: INFO: Pod "pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007390307s Jun 3 21:57:11.151: INFO: Pod "pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010806077s Jun 3 21:57:13.156: INFO: Pod "pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015698614s STEP: Saw pod success Jun 3 21:57:13.156: INFO: Pod "pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2" satisfied condition "Succeeded or Failed" Jun 3 21:57:13.159: INFO: Trying to get logs from node node1 pod pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2 container secret-volume-test: STEP: delete the pod Jun 3 21:57:13.178: INFO: Waiting for pod pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2 to disappear Jun 3 21:57:13.180: INFO: Pod pod-secrets-c299179f-72a2-4a9a-9e7a-7c20580fd6f2 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:13.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8385" for this suite. 
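------------------------------
For reference: this is the secret counterpart of the configmap-mapping spec earlier; items remaps a secret key onto a custom path inside the mount. A compact sketch with illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map           # illustrative
data:
  data-1: dmFsdWUtMQ==            # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets               # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                      # the "mappings"
      - key: data-1
        path: new-path-data-1
  containers:
  - name: secret-volume-test      # container name from the log
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
------------------------------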
• [SLOW TEST:6.087 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":224,"failed":0} [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:13.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:13.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7600" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":14,"skipped":224,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:13.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1450" for this suite. 
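------------------------------
For reference: a pod is classed Guaranteed only when every container sets limits equal to requests for both cpu and memory, which is the condition the QOS spec above verifies. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                  # illustrative
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # illustrative image
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                     # identical to requests => Guaranteed
        cpu: 100m
        memory: 100Mi

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}' then prints Guaranteed.
------------------------------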
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":15,"skipped":225,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:39.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:56:39.966: INFO: created pod Jun 3 21:56:39.966: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3080" to be "Succeeded or Failed" Jun 3 21:56:39.971: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475355ms Jun 3 21:56:41.975: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00855777s Jun 3 21:56:43.979: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012657935s STEP: Saw pod success Jun 3 21:56:43.979: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Jun 3 21:57:13.982: INFO: polling logs Jun 3 21:57:13.989: INFO: Pod logs: 2022/06/03 21:56:42 OK: Got token 2022/06/03 21:56:42 validating with in-cluster discovery 2022/06/03 21:56:42 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/06/03 21:56:42 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3080:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1654294000, NotBefore:1654293400, IssuedAt:1654293400, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3080", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"41d1cc99-a123-4c98-a8b4-5095ffde88f5"}}} 2022/06/03 21:56:42 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/06/03 21:56:42 OK: Validated signature on JWT 2022/06/03 21:56:42 OK: Got valid claims from token! 2022/06/03 21:56:42 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3080:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1654294000, NotBefore:1654293400, IssuedAt:1654293400, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3080", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"41d1cc99-a123-4c98-a8b4-5095ffde88f5"}}} Jun 3 21:57:13.989: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:13.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3080" for this suite. 
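------------------------------
For reference: the pod log above shows a token with Audience "oidc-discovery-test" and a 600-second lifetime (Expiry 1654294000 minus IssuedAt 1654293400); such a token comes from a projected serviceAccountToken source, and the validator resolves the issuer's /.well-known/openid-configuration and the JWKS it references to verify the signature. The validator's implementation is not in the log; the token projection would look like:

apiVersion: v1
kind: Pod
metadata:
  name: oidc-discovery-validator  # pod name from the log
spec:
  restartPolicy: Never
  serviceAccountName: default
  containers:
  - name: validator               # illustrative
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed image
    volumeMounts:
    - name: oidc-token
      mountPath: /var/run/secrets/oidc
  volumes:
  - name: oidc-token
    projected:
      sources:
      - serviceAccountToken:
          audience: oidc-discovery-test   # Audience claim from the log
          expirationSeconds: 600          # matches the logged 600s lifetime
          path: token
------------------------------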
• [SLOW TEST:34.077 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":8,"skipped":115,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:14.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:14.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3272" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":9,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":7,"skipped":175,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:15.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:56:16.017: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:17.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8669" for this suite. 
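The listing operation this spec exercises is easy to mirror with a throwaway CRD. A sketch using a hypothetical group and kind (widgets.example.com is not from the run):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl get widgets                      # listing works (empty result) once the CRD is Established
kubectl delete crd widgets.example.com   # clean up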
• [SLOW TEST:61.310 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":8,"skipped":175,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:13.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 21:57:20.415: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:20.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3393" for this suite. 
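The termination-message plumbing checked here can be tried directly: the kubelet mounts the file named by terminationMessagePath into the container and copies its contents into the terminated container's status. A sketch with hypothetical names, mirroring the non-root, non-default-path case:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo         # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, as in the spec above
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "printf DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
EOF
# once the pod reports Succeeded:
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect: DONE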
• [SLOW TEST:7.079 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":238,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:14.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-c4a999e7-fcf7-4bcb-874f-3246676d06d6 STEP: Creating a pod to test consume configMaps Jun 3 21:57:14.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2" in namespace "projected-9041" to be "Succeeded or Failed" Jun 3 21:57:14.174: INFO: Pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23934ms Jun 3 21:57:16.177: INFO: Pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005714138s Jun 3 21:57:18.181: INFO: Pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009844336s Jun 3 21:57:20.185: INFO: Pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014060016s Jun 3 21:57:22.189: INFO: Pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.018224979s STEP: Saw pod success Jun 3 21:57:22.189: INFO: Pod "pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2" satisfied condition "Succeeded or Failed" Jun 3 21:57:22.191: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2 container agnhost-container: STEP: delete the pod Jun 3 21:57:22.205: INFO: Waiting for pod pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2 to disappear Jun 3 21:57:22.206: INFO: Pod pod-projected-configmaps-7ab47f29-7ffd-4d08-81c5-7f2bf52515c2 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:22.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9041" for this suite. • [SLOW TEST:8.079 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":145,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:23.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-1355 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-1355 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1355 Jun 3 21:56:23.423: INFO: Found 0 stateful pods, waiting for 1 Jun 3 21:56:33.429: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 3 21:56:43.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 3 21:56:43.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 21:56:43.735: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 21:56:43.735: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 21:56:43.735: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 21:56:43.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 3 21:56:53.743: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 21:56:53.743: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 21:56:53.753: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 21:56:53.753: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC }] Jun 3 21:56:53.753: INFO: Jun 3 21:56:53.753: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 3 21:56:54.757: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996734537s Jun 3 21:56:55.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992227458s Jun 3 21:56:56.767: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987403971s Jun 3 21:56:57.772: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983461899s Jun 3 21:56:58.777: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978505773s Jun 3 21:56:59.782: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972078515s Jun 3 21:57:00.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96632841s Jun 3 21:57:01.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962500492s Jun 3 21:57:02.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.997597ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1355 Jun 3 21:57:03.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 21:57:04.055: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 21:57:04.055: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 21:57:04.055: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 21:57:04.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 21:57:04.297: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jun 3 21:57:04.297: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 21:57:04.297: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 21:57:04.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 21:57:04.910: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jun 3 
21:57:04.911: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 21:57:04.911: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 21:57:04.914: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:57:04.914: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:57:04.914: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 3 21:57:04.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 21:57:05.174: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 21:57:05.174: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 21:57:05.174: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 21:57:05.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 21:57:05.429: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 21:57:05.429: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 21:57:05.429: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 21:57:05.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1355 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 21:57:05.693: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 21:57:05.693: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 21:57:05.693: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 21:57:05.693: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 21:57:05.696: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 3 21:57:15.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 21:57:15.703: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 3 21:57:15.703: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 3 21:57:15.714: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 21:57:15.714: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC }] Jun 3 21:57:15.714: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:15.714: INFO: ss-2 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:15.714: INFO: Jun 3 21:57:15.714: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 21:57:16.718: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 21:57:16.718: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC }] Jun 3 21:57:16.718: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:16.718: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:16.719: INFO: Jun 3 21:57:16.719: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 21:57:17.724: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 21:57:17.724: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC }] Jun 3 21:57:17.724: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:17.725: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:17.725: INFO: Jun 3 21:57:17.725: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 21:57:18.728: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 21:57:18.728: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:23 +0000 UTC }] Jun 3 21:57:18.728: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:18.729: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:56:53 +0000 UTC }] Jun 3 21:57:18.729: INFO: Jun 3 21:57:18.729: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 3 21:57:19.732: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.98159706s Jun 3 21:57:20.734: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.978642959s Jun 3 21:57:21.739: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.975141057s Jun 3 21:57:22.741: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.972016218s Jun 3 21:57:23.744: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.969593381s Jun 3 21:57:24.746: INFO: Verifying statefulset ss doesn't scale past 0 for another 966.982873ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1355 Jun 3 21:57:25.750: INFO: Scaling statefulset ss to 0 Jun 3 21:57:25.759: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 3 21:57:25.761: INFO: Deleting all statefulset in ns statefulset-1355 Jun 3 21:57:25.763: INFO: Scaling statefulset ss to 0 Jun 3 21:57:25.771: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 21:57:25.773: INFO: Deleting statefulset ss [AfterEach] [sig-apps]
StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:25.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1355" for this suite. • [SLOW TEST:62.394 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":7,"skipped":75,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:20.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:57:20.737: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:57:22.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890240, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890240, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890240, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890240, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:57:25.757: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:25.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9563" for this suite. STEP: Destroying namespace "webhook-9563-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.431 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":17,"skipped":250,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:00.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:57:00.810: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 3 21:57:05.814: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 21:57:07.821: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 3 21:57:09.824: INFO: Creating deployment "test-rollover-deployment" Jun 3 21:57:09.832: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 3 21:57:11.839: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 3 21:57:11.844: INFO: Ensure that both replica sets have 1 created replica Jun 3 21:57:11.850: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 3 21:57:11.858: INFO: Updating deployment test-rollover-deployment Jun 3 21:57:11.858: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 3 21:57:13.866: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 3 21:57:13.871: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 3 21:57:13.876: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:13.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890231, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:15.883: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:15.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890231, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:17.883: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:17.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890231, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:19.881: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:19.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890238, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:21.884: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:21.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890238, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:23.885: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:23.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890238, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:25.881: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:25.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890238, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:27.884: INFO: all replica sets need to contain the pod-template-hash label Jun 3 21:57:27.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890238, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890229, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:57:29.884: INFO: Jun 3 21:57:29.884: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 21:57:29.891: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1150 0c35a7ac-5ab7-4cdd-b9f9-04faed243b97 35043 2 2022-06-03 21:57:09 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-03 21:57:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 21:57:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045ddd88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] 
map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-03 21:57:09 +0000 UTC,LastTransitionTime:2022-06-03 21:57:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-06-03 21:57:28 +0000 UTC,LastTransitionTime:2022-06-03 21:57:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 3 21:57:29.895: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-1150 62154b92-5033-47a9-ba6e-d9015fcd6aa1 35032 2 2022-06-03 21:57:11 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0c35a7ac-5ab7-4cdd-b9f9-04faed243b97 0xc003ee79c0 0xc003ee79c1}] [] [{kube-controller-manager Update apps/v1 2022-06-03 21:57:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c35a7ac-5ab7-4cdd-b9f9-04faed243b97\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003ee7a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil 
[] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 3 21:57:29.895: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 3 21:57:29.895: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1150 bec5ce8d-ca9f-4d28-bf72-1b1a49c0ce0b 35041 2 2022-06-03 21:57:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0c35a7ac-5ab7-4cdd-b9f9-04faed243b97 0xc003ee77b7 0xc003ee77b8}] [] [{e2e.test Update apps/v1 2022-06-03 21:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 21:57:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c35a7ac-5ab7-4cdd-b9f9-04faed243b97\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003ee7858 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 21:57:29.895: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-1150 c4f052fc-b1aa-4307-a3fb-7774a4aa9f03 34541 2 2022-06-03 21:57:09 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0c35a7ac-5ab7-4cdd-b9f9-04faed243b97 0xc003ee78c7 0xc003ee78c8}] [] [{kube-controller-manager Update apps/v1 2022-06-03 21:57:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c35a7ac-5ab7-4cdd-b9f9-04faed243b97\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003ee7958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 21:57:29.898: INFO: Pod "test-rollover-deployment-98c5f4599-qm54p" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-qm54p test-rollover-deployment-98c5f4599- deployment-1150 3373213d-c253-464f-8786-f4f2dff894fa 34770 0 2022-06-03 21:57:11 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.191" ], "mac": "1a:d0:07:9b:dc:ec", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.191" ], "mac": "1a:d0:07:9b:dc:ec", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 62154b92-5033-47a9-ba6e-d9015fcd6aa1 0xc003ee7f2f 0xc003ee7f40}] [] [{kube-controller-manager Update v1 2022-06-03 21:57:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62154b92-5033-47a9-ba6e-d9015fcd6aa1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 21:57:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 21:57:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.191\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-scd5x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scd5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:57:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:57:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.191,StartTime:2022-06-03 21:57:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 21:57:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://10fa877134f75089d57dd7a54968ebde7c30704fbfcc5b91c8e2b83c7efeb5ca,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:29.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1150" for this suite. 
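The rollover spec that just completed drives the Deployment controller through a ReplicaSet handoff: the pod dump above is the framework waiting on the replacement pod. A minimal sketch of the same flow with plain kubectl (the deployment name and second image are illustrative, not taken from this run):

# Create a deployment, then trigger a rollover by swapping the container image.
kubectl create deployment rollover-demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl set image deployment/rollover-demo agnhost=k8s.gcr.io/e2e-test-images/httpd:2.4.38-alpine
# Wait for the new ReplicaSet to take over from the old one.
kubectl rollout status deployment/rollover-demo
# Both ReplicaSets remain visible; the old one is scaled to 0 after the handoff.
kubectl get rs -l app=rollover-demo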
• [SLOW TEST:29.122 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":4,"skipped":108,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:10.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4204 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4204 STEP: creating replication controller externalsvc in namespace services-4204 I0603 21:57:10.096859 34 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4204, replica count: 2 I0603 21:57:13.149445 34 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 21:57:16.150658 34 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 21:57:19.152164 34 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 3 21:57:19.163: INFO: Creating new exec pod Jun 3 21:57:23.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4204 exec execpod2pl84 -- /bin/sh -x -c nslookup clusterip-service.services-4204.svc.cluster.local' Jun 3 21:57:23.640: INFO: stderr: "+ nslookup clusterip-service.services-4204.svc.cluster.local\n" Jun 3 21:57:23.640: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-4204.svc.cluster.local\tcanonical name = externalsvc.services-4204.svc.cluster.local.\nName:\texternalsvc.services-4204.svc.cluster.local\nAddress: 10.233.33.78\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4204, will wait for the garbage collector to delete the pods Jun 3 21:57:23.696: INFO: Deleting ReplicationController externalsvc took: 3.46959ms Jun 3 21:57:23.797: INFO: Terminating ReplicationController externalsvc pods took: 100.952555ms Jun 3 21:57:30.207: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:30.212: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "services-4204" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:20.162 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":9,"skipped":120,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:25.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:57:25.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 create -f -' Jun 3 21:57:26.363: INFO: stderr: "" Jun 3 21:57:26.363: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jun 3 21:57:26.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 create -f -' Jun 3 21:57:26.723: INFO: stderr: "" Jun 3 21:57:26.723: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 3 21:57:27.727: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 21:57:27.727: INFO: Found 0 / 1 Jun 3 21:57:28.728: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 21:57:28.728: INFO: Found 0 / 1 Jun 3 21:57:29.727: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 21:57:29.727: INFO: Found 0 / 1 Jun 3 21:57:30.727: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 21:57:30.727: INFO: Found 1 / 1 Jun 3 21:57:30.727: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 21:57:30.731: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 21:57:30.731: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 3 21:57:30.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 describe pod agnhost-primary-77vsv' Jun 3 21:57:30.929: INFO: stderr: "" Jun 3 21:57:30.929: INFO: stdout: "Name: agnhost-primary-77vsv\nNamespace: kubectl-9995\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 03 Jun 2022 21:57:26 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.196\"\n ],\n \"mac\": \"2a:c1:09:e0:b9:53\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.196\"\n ],\n \"mac\": \"2a:c1:09:e0:b9:53\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.196\nIPs:\n IP: 10.244.3.196\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://455b3fd8f32c7342e317e8606dc0a20ba7bdc8448c0d79b92e5bffe9e6aa7313\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 03 Jun 2022 21:57:29 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ctcmg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-ctcmg:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9995/agnhost-primary-77vsv to node1\n Normal Pulling 1s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 1s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 277.203839ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Jun 3 21:57:30.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 describe rc agnhost-primary' Jun 3 21:57:31.125: INFO: stderr: "" Jun 3 21:57:31.125: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9995\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-77vsv\n" Jun 3 21:57:31.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 describe service agnhost-primary' Jun 3 21:57:31.300: INFO: stderr: "" Jun 3 
21:57:31.300: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9995\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.7.106\nIPs: 10.233.7.106\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.196:6379\nSession Affinity: None\nEvents: \n" Jun 3 21:57:31.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 describe node master1' Jun 3 21:57:31.530: INFO: stderr: "" Jun 3 21:57:31.531: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 03 Jun 2022 19:57:53 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 03 Jun 2022 21:57:26 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 03 Jun 2022 20:03:30 +0000 Fri, 03 Jun 2022 20:03:30 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 03 Jun 2022 21:57:31 +0000 Fri, 03 Jun 2022 19:57:50 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 03 Jun 2022 21:57:31 +0000 Fri, 03 Jun 2022 19:57:50 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 03 Jun 2022 21:57:31 +0000 Fri, 03 Jun 2022 19:57:50 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 03 Jun 2022 21:57:31 +0000 Fri, 03 Jun 2022 20:00:47 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518304Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629472Ki\n pods: 110\nSystem Info:\n Machine ID: 3d668405f73a457bb0bcb4df5f4edac8\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: c08279e3-a5cb-4f4d-b9f0-f2cde655469f\n Kernel Version: 3.10.0-1160.66.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.16\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-2nzvn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 112m\n kube-system coredns-8474476ff8-rvc4v 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 116m\n kube-system dns-autoscaler-7df78bfcfb-vdtpl 20m (0%) 0 (0%) 10Mi (0%) 0 (0%) 
116m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 118m\n kube-system kube-flannel-m8sj7 150m (0%) 300m (0%) 64M (0%) 500M (0%) 117m\n kube-system kube-multus-ds-amd64-n58qk 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 116m\n kube-system kube-proxy-zgchh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 117m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 100m\n monitoring node-exporter-45rhg 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 104m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1032m (1%) 670m (0%)\n memory 441380Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jun 3 21:57:31.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9995 describe namespace kubectl-9995' Jun 3 21:57:31.716: INFO: stderr: "" Jun 3 21:57:31.716: INFO: stdout: "Name: kubectl-9995\nLabels: e2e-framework=kubectl\n e2e-run=04eedd33-b1d8-49af-ad12-1ddd714de1cc\n kubernetes.io/metadata.name=kubectl-9995\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:31.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9995" for this suite. • [SLOW TEST:5.786 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":18,"skipped":266,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:30.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-cf036214-3127-4a5b-a7f9-b60f2d88c1f1 STEP: Creating a pod to test consume secrets Jun 3 21:57:30.277: INFO: Waiting up to 5m0s for pod "pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e" in namespace "secrets-7055" to be "Succeeded or Failed" Jun 3 21:57:30.279: INFO: Pod "pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329926ms Jun 3 21:57:32.284: INFO: Pod "pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007307476s Jun 3 21:57:34.289: INFO: Pod "pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012324862s STEP: Saw pod success Jun 3 21:57:34.289: INFO: Pod "pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e" satisfied condition "Succeeded or Failed" Jun 3 21:57:34.293: INFO: Trying to get logs from node node1 pod pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e container secret-volume-test: STEP: delete the pod Jun 3 21:57:34.308: INFO: Waiting for pod pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e to disappear Jun 3 21:57:34.310: INFO: Pod pod-secrets-e28c01a1-3d15-4df9-ac26-8bc33b32862e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:34.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7055" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":125,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:31.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:57:31.783: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 3 21:57:33.809: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:34.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3884" for this suite. 
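The quota spec above relies on the ReplicationController surfacing a failure condition when pod creation is rejected by the ResourceQuota admission plugin. A hand-run sketch of the same sequence (object names and image are illustrative):

# Quota allowing only two pods in the namespace.
kubectl create quota condition-test --hard=pods=2
# An RC asking for three replicas; the third pod is rejected by the quota.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.4.1
EOF
# The rejection surfaces as a ReplicaFailure condition in the RC status.
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
# Scaling down to fit the quota clears the condition, as the spec asserts.
kubectl scale rc condition-test --replicas=2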
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":19,"skipped":280,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:34.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:57:34.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6" in namespace "downward-api-4447" to be "Succeeded or Failed" Jun 3 21:57:34.385: INFO: Pod "downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530987ms Jun 3 21:57:36.388: INFO: Pod "downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007553694s Jun 3 21:57:38.395: INFO: Pod "downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013953938s STEP: Saw pod success Jun 3 21:57:38.395: INFO: Pod "downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6" satisfied condition "Succeeded or Failed" Jun 3 21:57:38.397: INFO: Trying to get logs from node node2 pod downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6 container client-container: STEP: delete the pod Jun 3 21:57:38.410: INFO: Waiting for pod downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6 to disappear Jun 3 21:57:38.412: INFO: Pod downwardapi-volume-41557b4b-7b0a-4720-af43-cf0f88d96cb6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:38.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4447" for this suite. 
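The downward-API spec above checks that when a container declares no memory limit, the projected limits.memory value falls back to the node's allocatable memory. A sketch of that projection (pod name and image are illustrative, not from this run):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# With no limit declared, the file holds the node's allocatable memory in bytes.
kubectl logs downwardapi-demo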
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:38.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-f9196c5d-b7e3-4483-9b4e-3e93852ddb51 STEP: Creating secret with name s-test-opt-upd-0462251a-a5df-4aaa-a70f-e34a34043684 STEP: Creating the pod Jun 3 21:57:38.539: INFO: The status of Pod pod-secrets-856299e9-3251-4911-8df3-cbdd298df659 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:40.543: INFO: The status of Pod pod-secrets-856299e9-3251-4911-8df3-cbdd298df659 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:42.543: INFO: The status of Pod pod-secrets-856299e9-3251-4911-8df3-cbdd298df659 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-f9196c5d-b7e3-4483-9b4e-3e93852ddb51 STEP: Updating secret s-test-opt-upd-0462251a-a5df-4aaa-a70f-e34a34043684 STEP: Creating secret with name s-test-opt-create-1ac3f3bd-6979-481a-aad9-9c05069b713e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:46.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7369" for this suite. • [SLOW TEST:8.131 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":164,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:34.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:47.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2749" for this suite. • [SLOW TEST:13.102 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":20,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:48.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:55.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2245" for this suite. • [SLOW TEST:7.046 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":21,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:46.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9095.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9095.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9095.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9095.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9095.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 21:57:50.727: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-d96f2f2c-b03b-44b1-855f-5155b0eacef6: the server could not find the requested resource (get pods dns-test-d96f2f2c-b03b-44b1-855f-5155b0eacef6) Jun 3 21:57:50.746: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local from pod dns-9095/dns-test-d96f2f2c-b03b-44b1-855f-5155b0eacef6: the server could not find the requested resource (get pods dns-test-d96f2f2c-b03b-44b1-855f-5155b0eacef6) Jun 3 21:57:50.756: INFO: Lookups using dns-9095/dns-test-d96f2f2c-b03b-44b1-855f-5155b0eacef6 failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9095.svc.cluster.local] Jun 3 21:57:55.788: INFO: DNS probes using dns-9095/dns-test-d96f2f2c-b03b-44b1-855f-5155b0eacef6 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:55.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9095" for this suite. • [SLOW TEST:9.173 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":13,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:55.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Jun 3 21:57:55.949: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:55.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "events-1697" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":14,"skipped":219,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:56.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:57:56.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2234" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":15,"skipped":223,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:56.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:02.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4061" for this suite. 
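The DisruptionController spec above exercises create, update, patch, and delete of the budget object itself. A minimal PodDisruptionBudget of the same shape (policy/v1 is available on this v1.21 cluster; the name and selector are illustrative):

kubectl create -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: demo
EOF
# ALLOWED DISRUPTIONS stays 0 until enough matching pods are healthy.
kubectl get pdb demo-pdb
kubectl delete pdb demo-pdb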
• [SLOW TEST:6.074 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":16,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:55.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Jun 3 21:57:55.162: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:57.166: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:59.168: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 3 21:57:59.187: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:01.190: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:03.192: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Jun 3 21:58:03.200: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 21:58:03.202: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 21:58:05.203: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 21:58:05.208: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 21:58:07.204: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 21:58:07.206: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 21:58:09.204: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 21:58:09.208: INFO: Pod pod-with-prestop-http-hook still exists Jun 3 21:58:11.203: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 3 21:58:11.206: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:11.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9785" for this suite. 
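The lifecycle-hook spec above deletes pod-with-prestop-http-hook and then asserts that the handler pod (pod-handle-http-request) received the preStop HTTP request. The hook wiring looks roughly like this; the handler host and port are placeholders for the handler pod's address, and the echo path mirrors the test's intent rather than quoting it:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.4.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          host: 10.244.3.205   # placeholder: the handler pod's IP
          port: 8080
EOF
# Deleting the pod fires the preStop hook before the container is stopped.
kubectl delete pod pod-with-prestop-http-hook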
• [SLOW TEST:16.097 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:02.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8748 STEP: creating service affinity-clusterip in namespace services-8748 STEP: creating replication controller affinity-clusterip in namespace services-8748 I0603 21:58:02.303487 34 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8748, replica count: 3 I0603 21:58:05.354859 34 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 21:58:08.357460 34 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 21:58:08.366: INFO: Creating new exec pod Jun 3 21:58:13.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8748 exec execpod-affinityhvh8h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jun 3 21:58:13.651: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jun 3 21:58:13.651: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 21:58:13.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8748 exec execpod-affinityhvh8h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.21.66 80' Jun 3 21:58:13.923: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.21.66 80\nConnection to 10.233.21.66 80 port [tcp/http] succeeded!\n" Jun 3 21:58:13.923: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 21:58:13.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8748 exec execpod-affinityhvh8h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://10.233.21.66:80/ ; done' Jun 3 21:58:14.245: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.21.66:80/\n" Jun 3 21:58:14.245: INFO: stdout: "\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq\naffinity-clusterip-rfjsq" Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Received response from host: affinity-clusterip-rfjsq Jun 3 21:58:14.245: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8748, will wait for the garbage collector to delete the pods Jun 3 21:58:14.309: INFO: Deleting ReplicationController affinity-clusterip took: 4.473519ms Jun 3 21:58:14.410: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.930499ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:30.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8748" for this suite. 
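The affinity spec above verifies that sixteen consecutive requests through the ClusterIP all land on one backend (the run of identical "affinity-clusterip-rfjsq" responses). The behavior comes from a single field on the Service; a sketch, with selector and ports illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
spec:
  selector:
    name: affinity-clusterip
  sessionAffinity: ClientIP   # requests from one client IP stick to one endpoint
  ports:
  - port: 80
    targetPort: 9376
EOF
# From a client pod, repeated requests should keep returning the same hostname.
for i in $(seq 0 15); do curl -s --connect-timeout 2 http://affinity-clusterip:80/; echo; done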
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:27.962 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:55:52.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Jun 3 21:57:52.825: INFO: Successfully updated pod "var-expansion-c046e257-43e5-43df-b3ad-6a7fef02b93e" STEP: waiting for pod running STEP: deleting the pod gracefully Jun 3 21:57:54.831: INFO: Deleting pod "var-expansion-c046e257-43e5-43df-b3ad-6a7fef02b93e" in namespace "var-expansion-5263" Jun 3 21:57:54.836: INFO: Wait up to 5m0s for pod "var-expansion-c046e257-43e5-43df-b3ad-6a7fef02b93e" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:32.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5263" for this suite. 
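The var-expansion spec above creates a pod whose volume subpath expansion initially fails, then updates the pod so the expansion resolves and the container can start. The mechanism is subPathExpr, which expands environment variables declared on the container; a minimal well-formed example (names and image illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "ls /logs"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)   # mounts the emptyDir subdirectory "var-expansion-demo"
  volumes:
  - name: workdir
    emptyDir: {}
EOF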
• [SLOW TEST:160.580 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":285,"failed":0} [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:30.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1600" for this suite. 
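The sysctl spec above sets a safe sysctl through the pod-level security context and reads it back from inside the pod. An equivalent manifest (pod name and image illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # in the kubelet's default safe set
      value: "1"
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
EOF
# The pod log should show 1, which is what the spec's final check asserts.
kubectl logs sysctl-demo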
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":18,"skipped":285,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:32.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 3 21:58:32.956: INFO: Waiting up to 5m0s for pod "pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479" in namespace "emptydir-8212" to be "Succeeded or Failed" Jun 3 21:58:32.961: INFO: Pod "pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479": Phase="Pending", Reason="", readiness=false. Elapsed: 5.00522ms Jun 3 21:58:34.964: INFO: Pod "pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729515s Jun 3 21:58:36.967: INFO: Pod "pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010949349s STEP: Saw pod success Jun 3 21:58:36.967: INFO: Pod "pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479" satisfied condition "Succeeded or Failed" Jun 3 21:58:36.970: INFO: Trying to get logs from node node2 pod pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479 container test-container: STEP: delete the pod Jun 3 21:58:36.983: INFO: Waiting for pod pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479 to disappear Jun 3 21:58:36.985: INFO: Pod pod-1ef3292e-4dbd-423e-a0d5-526e0fab3479 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:36.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8212" for this suite. 
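The emptydir spec above writes a mode-0644 file into a default-medium emptyDir as root and verifies content and permissions. A sketch of the same check (names and image illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: backed by node disk
EOF
# Expect a -rw-r--r-- listing for /test-volume/f in the pod log.
kubectl logs emptydir-demo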
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:11.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9949 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 21:58:11.292: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 21:58:11.328: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:13.333: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:15.332: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:17.332: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:19.332: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:21.332: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:23.333: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:25.332: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:27.333: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:29.331: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:31.332: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 21:58:33.332: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 21:58:33.338: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 21:58:37.374: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 3 21:58:37.374: INFO: Going to poll 10.244.3.205 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jun 3 21:58:37.377: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.205 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9949 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:58:37.377: INFO: >>> kubeConfig: /root/.kube/config Jun 3 21:58:38.476: INFO: Found all 1 expected endpoints: [netserver-0] Jun 3 21:58:38.476: INFO: Going to poll 10.244.4.18 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jun 3 21:58:38.480: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.18 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9949 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:58:38.480: INFO: >>> kubeConfig: /root/.kube/config Jun 3 21:58:39.559: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:39.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9949" for this suite. • [SLOW TEST:28.302 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:34.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-74d414ee-6faa-4c63-b0ec-7e94759dfa3c STEP: Creating configMap with name cm-test-opt-upd-86c2d60c-38e8-4a32-81b0-997a00ccaeaf STEP: Creating the pod Jun 3 21:58:34.377: INFO: The status of Pod pod-configmaps-8063a1e7-274c-4ada-a1b3-27074e893d4a is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:36.381: INFO: The status of Pod pod-configmaps-8063a1e7-274c-4ada-a1b3-27074e893d4a is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:38.383: INFO: The status of Pod pod-configmaps-8063a1e7-274c-4ada-a1b3-27074e893d4a is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-74d414ee-6faa-4c63-b0ec-7e94759dfa3c STEP: Updating configmap cm-test-opt-upd-86c2d60c-38e8-4a32-81b0-997a00ccaeaf STEP: Creating configMap with name cm-test-opt-create-4bd13997-b1b5-4e8f-826a-2d599c0531da STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:42.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2748" for this suite. 
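The ConfigMap test above exercises the optional flag on volume sources. A minimal sketch, with illustrative names: with optional: true the pod starts even if the referenced ConfigMap does not exist, and the kubelet adds or removes the projected files as ConfigMaps are created and deleted, which is what the cm-test-opt-del / cm-test-opt-upd / cm-test-opt-create steps rely on:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-demo          # illustrative name
    spec:
      containers:
      - name: main
        image: busybox                   # illustrative image
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: cm-test-opt              # illustrative name
          optional: true                 # a missing ConfigMap is tolerated; files appear once it exists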
• [SLOW TEST:8.154 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:37.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Jun 3 21:58:39.057: INFO: running pods: 0 < 1 Jun 3 21:58:41.061: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:43.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2326" for this suite. 
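The DisruptionController test above creates a PodDisruptionBudget and then updates and patches its status subresource through the API. A minimal PDB of that kind (name and selector illustrative):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: pdb-demo                     # illustrative name
    spec:
      minAvailable: 1                    # eviction is refused if it would leave fewer than 1 ready pod
      selector:
        matchLabels:
          app: demo                      # illustrative selector

Status writes go to the /status subresource; the controller-maintained fields can be inspected with kubectl get pdb pdb-demo -o yaml once the PDB has been processed, which is what the repeated "Waiting for the pdb to be processed" steps poll for.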
• [SLOW TEST:6.097 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:39.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:58:39.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1" in namespace "projected-374" to be "Succeeded or Failed" Jun 3 21:58:39.772: INFO: Pod "downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071217ms Jun 3 21:58:41.777: INFO: Pod "downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008086738s Jun 3 21:58:43.781: INFO: Pod "downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011695541s STEP: Saw pod success Jun 3 21:58:43.781: INFO: Pod "downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1" satisfied condition "Succeeded or Failed" Jun 3 21:58:43.784: INFO: Trying to get logs from node node2 pod downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1 container client-container: STEP: delete the pod Jun 3 21:58:43.797: INFO: Waiting for pod downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1 to disappear Jun 3 21:58:43.799: INFO: Pod downwardapi-volume-04f53eed-1c78-4614-b316-1a3f0bf701b1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:43.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-374" for this suite. 
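The Projected downwardAPI test above sets a per-item file mode on a downward-API file. A sketch of such a volume; the path and mode are assumptions, since the log does not print the exact values used:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                   # illustrative image
        command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
                mode: 0400               # per-item mode; overrides the volume's defaultMode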
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":430,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:29.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-516b9f65-c936-4a5b-9e53-8e60d0df55ac STEP: Creating configMap with name cm-test-opt-upd-d577adcd-afd8-4141-bd1b-49d9219a6133 STEP: Creating the pod Jun 3 21:57:29.989: INFO: The status of Pod pod-projected-configmaps-587c6dbd-1f45-425d-b287-6f93fb8f29d9 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:31.994: INFO: The status of Pod pod-projected-configmaps-587c6dbd-1f45-425d-b287-6f93fb8f29d9 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:33.995: INFO: The status of Pod pod-projected-configmaps-587c6dbd-1f45-425d-b287-6f93fb8f29d9 is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:35.993: INFO: The status of Pod pod-projected-configmaps-587c6dbd-1f45-425d-b287-6f93fb8f29d9 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-516b9f65-c936-4a5b-9e53-8e60d0df55ac STEP: Updating configmap cm-test-opt-upd-d577adcd-afd8-4141-bd1b-49d9219a6133 STEP: Creating configMap with name cm-test-opt-create-b4a35c2a-8e4e-4258-a43f-b0bfa6991c59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:45.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6966" for this suite. 
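Unlike the plain configMap volume earlier, the Projected configMap tests consume ConfigMaps through a projected volume, which can merge several sources into one mount. A sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-demo  # illustrative name
    spec:
      containers:
      - name: main
        image: busybox                     # illustrative image
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected-cfg
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: cm-test-opt-upd-demo   # illustrative name
          - configMap:
              name: cm-test-opt-del-demo   # illustrative name
              optional: true               # may be deleted while the pod runs, as this test does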
• [SLOW TEST:75.681 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":119,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:42.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:58:42.943: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:58:44.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890322, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890322, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890322, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890322, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:58:47.962: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 3 21:58:47.975: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:47.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7096" for this suite. STEP: Destroying namespace "webhook-7096-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.450 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":20,"skipped":331,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:22.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4eaf3d8b-8a35-441b-8ae8-67bad4eada70 STEP: Creating the pod Jun 3 21:57:22.287: INFO: The status of Pod pod-projected-configmaps-780a05ab-1788-41ce-a14a-63fde23daeff is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:24.291: INFO: The status of Pod pod-projected-configmaps-780a05ab-1788-41ce-a14a-63fde23daeff is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:57:26.290: INFO: The status of Pod pod-projected-configmaps-780a05ab-1788-41ce-a14a-63fde23daeff is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-4eaf3d8b-8a35-441b-8ae8-67bad4eada70 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:51.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6954" for this suite. 
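The "updates should be reflected in volume" test above works because the kubelet periodically re-syncs ConfigMap-backed volumes. The effect can be reproduced by hand roughly like this (resource names are illustrative, and the pod manifest is assumed to mount the ConfigMap as a volume):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    # ... create a pod that mounts demo-cm as a configMap (or projected) volume ...
    kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
    # The mounted file eventually shows value-2; propagation takes up to the kubelet
    # sync period plus the ConfigMap cache TTL, which is why the test polls and why
    # most of its ~90 seconds is spent in "waiting to observe update in volume".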
• [SLOW TEST:89.746 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:52.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Jun 3 21:58:52.619: INFO: created pod pod-service-account-defaultsa Jun 3 21:58:52.619: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 3 21:58:52.629: INFO: created pod pod-service-account-mountsa Jun 3 21:58:52.629: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 3 21:58:52.638: INFO: created pod pod-service-account-nomountsa Jun 3 21:58:52.638: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 3 21:58:52.647: INFO: created pod pod-service-account-defaultsa-mountspec Jun 3 21:58:52.647: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 3 21:58:52.656: INFO: created pod pod-service-account-mountsa-mountspec Jun 3 21:58:52.657: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 3 21:58:52.666: INFO: created pod pod-service-account-nomountsa-mountspec Jun 3 21:58:52.666: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 3 21:58:52.677: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 3 21:58:52.677: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 3 21:58:52.686: INFO: created pod pod-service-account-mountsa-nomountspec Jun 3 21:58:52.686: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 3 21:58:52.695: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 3 21:58:52.695: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:58:52.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6823" for this suite. 
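The ServiceAccounts test above checks every combination of the two automount switches. A sketch with illustrative names; when both are set, the pod-level field wins, which is why pod-service-account-nomountsa-mountspec in the log still gets a token mount while pod-service-account-mountsa-nomountspec does not:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa                    # illustrative name
    automountServiceAccountToken: false   # account-level default
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-no-token                  # illustrative name
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: false # pod-level setting overrides the account's
      containers:
      - name: main
        image: busybox                    # illustrative image
        command: ["sleep", "3600"]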
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":12,"skipped":187,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:43.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:58:47.884: INFO: Deleting pod "var-expansion-78a4cfdb-4dfd-48ca-9aad-ec17edb4e7d6" in namespace "var-expansion-1765" Jun 3 21:58:47.888: INFO: Wait up to 5m0s for pod "var-expansion-78a4cfdb-4dfd-48ca-9aad-ec17edb4e7d6" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:03.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1765" for this suite. • [SLOW TEST:20.065 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":25,"skipped":439,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:48.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:06.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3569" for this suite. 
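The Job test above relies on restartPolicy: OnFailure, so failed containers are restarted in place by the kubelet instead of failing the whole pod. A sketch of a job whose tasks fail exactly once and then succeed after the local restart (names are illustrative; an emptyDir serves as the "already ran" marker because it survives container restarts within a pod):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: fail-once-local-demo         # illustrative name
    spec:
      completions: 4                     # illustrative counts
      parallelism: 2
      template:
        spec:
          restartPolicy: OnFailure       # restart the container locally rather than failing the pod
          containers:
          - name: c
            image: busybox               # illustrative image
            command: ["/bin/sh", "-c", "if [ -f /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"]
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            emptyDir: {}                 # persists across container restarts in the same pod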
• [SLOW TEST:18.044 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":21,"skipped":340,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:52.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:58:52.739: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 3 21:59:00.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2324 --namespace=crd-publish-openapi-2324 create -f -' Jun 3 21:59:01.424: INFO: stderr: "" Jun 3 21:59:01.424: INFO: stdout: "e2e-test-crd-publish-openapi-2132-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 3 21:59:01.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2324 --namespace=crd-publish-openapi-2324 delete e2e-test-crd-publish-openapi-2132-crds test-cr' Jun 3 21:59:01.609: INFO: stderr: "" Jun 3 21:59:01.609: INFO: stdout: "e2e-test-crd-publish-openapi-2132-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 3 21:59:01.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2324 --namespace=crd-publish-openapi-2324 apply -f -' Jun 3 21:59:01.944: INFO: stderr: "" Jun 3 21:59:01.944: INFO: stdout: "e2e-test-crd-publish-openapi-2132-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 3 21:59:01.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2324 --namespace=crd-publish-openapi-2324 delete e2e-test-crd-publish-openapi-2132-crds test-cr' Jun 3 21:59:02.114: INFO: stderr: "" Jun 3 21:59:02.114: INFO: stdout: "e2e-test-crd-publish-openapi-2132-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 3 21:59:02.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2324 explain e2e-test-crd-publish-openapi-2132-crds' Jun 3 21:59:02.487: INFO: stderr: "" Jun 3 21:59:02.487: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2132-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:06.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2324" for this suite. • [SLOW TEST:13.468 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":13,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:03.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:59:03.971: INFO: Waiting up to 5m0s for pod "busybox-user-65534-890a9c98-d48b-43dd-b342-b6c87ade2fe9" in namespace "security-context-test-3831" to be "Succeeded or Failed" Jun 3 21:59:03.982: INFO: Pod "busybox-user-65534-890a9c98-d48b-43dd-b342-b6c87ade2fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25205ms Jun 3 21:59:05.985: INFO: Pod "busybox-user-65534-890a9c98-d48b-43dd-b342-b6c87ade2fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013615176s Jun 3 21:59:07.988: INFO: Pod "busybox-user-65534-890a9c98-d48b-43dd-b342-b6c87ade2fe9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016336308s Jun 3 21:59:07.988: INFO: Pod "busybox-user-65534-890a9c98-d48b-43dd-b342-b6c87ade2fe9" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:07.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3831" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":451,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:08.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 3 21:59:08.054: INFO: Waiting up to 5m0s for pod "pod-512036f0-cdb7-4851-8bb9-8aea3b32c236" in namespace "emptydir-6473" to be "Succeeded or Failed" Jun 3 21:59:08.061: INFO: Pod "pod-512036f0-cdb7-4851-8bb9-8aea3b32c236": Phase="Pending", Reason="", readiness=false. Elapsed: 6.737987ms Jun 3 21:59:10.064: INFO: Pod "pod-512036f0-cdb7-4851-8bb9-8aea3b32c236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010060979s Jun 3 21:59:12.068: INFO: Pod "pod-512036f0-cdb7-4851-8bb9-8aea3b32c236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014089523s STEP: Saw pod success Jun 3 21:59:12.069: INFO: Pod "pod-512036f0-cdb7-4851-8bb9-8aea3b32c236" satisfied condition "Succeeded or Failed" Jun 3 21:59:12.071: INFO: Trying to get logs from node node2 pod pod-512036f0-cdb7-4851-8bb9-8aea3b32c236 container test-container: STEP: delete the pod Jun 3 21:59:12.086: INFO: Waiting for pod pod-512036f0-cdb7-4851-8bb9-8aea3b32c236 to disappear Jun 3 21:59:12.088: INFO: Pod pod-512036f0-cdb7-4851-8bb9-8aea3b32c236 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:12.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6473" for this suite. 
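Referring back to the Security Context test just above: the pod name busybox-user-65534-... reflects a runAsUser of 65534 (the conventional "nobody" UID). A minimal equivalent, with illustrative name and command:

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-user-65534-demo      # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "id -u"]   # prints 65534
        securityContext:
          runAsUser: 65534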
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:06.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Jun 3 21:59:12.169: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7462 PodName:pod-sharedvolume-96216673-fdfb-4794-8c15-d018de6316b8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:59:12.169: INFO: >>> kubeConfig: /root/.kube/config Jun 3 21:59:12.265: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:12.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7462" for this suite. • [SLOW TEST:6.152 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":22,"skipped":351,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:12.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Jun 3 21:59:12.182: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4083 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:12.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4083" for this suite. 
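The shared-volume test above mounts one emptyDir into two containers of the same pod, then execs into the second container to read /usr/share/volumeshare/shareddata.txt (the path is taken from the log; container names and image are illustrative). A sketch of that arrangement:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-sharedvolume-demo        # illustrative name
    spec:
      containers:
      - name: writer
        image: busybox
        command: ["/bin/sh", "-c", "echo shared data > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
        volumeMounts:
        - name: share
          mountPath: /usr/share/volumeshare
      - name: reader
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: share
          mountPath: /usr/share/volumeshare
      volumes:
      - name: share
        emptyDir: {}                     # one backing directory, visible to both containers

Running kubectl exec pod-sharedvolume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt then prints what the writer wrote, which is essentially the check the test performs.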
•S ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":28,"skipped":483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:45.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Jun 3 21:58:45.664: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:47.667: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:49.667: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 3 21:58:49.686: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:51.689: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:53.691: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:55.690: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:58:57.690: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 3 21:58:57.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:58:57.707: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:58:59.709: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:58:59.711: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:01.708: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:59:01.712: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:03.708: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:59:03.711: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:05.709: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:59:05.711: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:07.708: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:59:07.711: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:09.708: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:59:09.711: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:11.710: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 21:59:11.713: INFO: Pod pod-with-poststart-http-hook still exists Jun 3 21:59:13.709: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 3 
21:59:13.712: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3179" for this suite. • [SLOW TEST:28.096 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:13.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:59:14.097: INFO: Checking APIGroup: apiregistration.k8s.io Jun 3 21:59:14.098: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jun 3 21:59:14.098: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.098: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jun 3 21:59:14.098: INFO: Checking APIGroup: apps Jun 3 21:59:14.099: INFO: PreferredVersion.GroupVersion: apps/v1 Jun 3 21:59:14.099: INFO: Versions found [{apps/v1 v1}] Jun 3 21:59:14.099: INFO: apps/v1 matches apps/v1 Jun 3 21:59:14.099: INFO: Checking APIGroup: events.k8s.io Jun 3 21:59:14.100: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jun 3 21:59:14.100: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.100: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jun 3 21:59:14.100: INFO: Checking APIGroup: authentication.k8s.io Jun 3 21:59:14.101: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jun 3 21:59:14.101: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.101: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jun 3 21:59:14.101: INFO: Checking APIGroup: authorization.k8s.io Jun 3 21:59:14.102: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jun 3 21:59:14.102: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.103: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jun 3 21:59:14.103: INFO: 
Checking APIGroup: autoscaling Jun 3 21:59:14.103: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jun 3 21:59:14.103: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jun 3 21:59:14.103: INFO: autoscaling/v1 matches autoscaling/v1 Jun 3 21:59:14.103: INFO: Checking APIGroup: batch Jun 3 21:59:14.104: INFO: PreferredVersion.GroupVersion: batch/v1 Jun 3 21:59:14.104: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jun 3 21:59:14.104: INFO: batch/v1 matches batch/v1 Jun 3 21:59:14.104: INFO: Checking APIGroup: certificates.k8s.io Jun 3 21:59:14.105: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jun 3 21:59:14.105: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.105: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jun 3 21:59:14.105: INFO: Checking APIGroup: networking.k8s.io Jun 3 21:59:14.106: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jun 3 21:59:14.106: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.106: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jun 3 21:59:14.106: INFO: Checking APIGroup: extensions Jun 3 21:59:14.107: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jun 3 21:59:14.107: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jun 3 21:59:14.107: INFO: extensions/v1beta1 matches extensions/v1beta1 Jun 3 21:59:14.107: INFO: Checking APIGroup: policy Jun 3 21:59:14.108: INFO: PreferredVersion.GroupVersion: policy/v1 Jun 3 21:59:14.108: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Jun 3 21:59:14.108: INFO: policy/v1 matches policy/v1 Jun 3 21:59:14.108: INFO: Checking APIGroup: rbac.authorization.k8s.io Jun 3 21:59:14.108: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jun 3 21:59:14.109: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.109: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jun 3 21:59:14.109: INFO: Checking APIGroup: storage.k8s.io Jun 3 21:59:14.109: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jun 3 21:59:14.109: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.109: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jun 3 21:59:14.109: INFO: Checking APIGroup: admissionregistration.k8s.io Jun 3 21:59:14.110: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jun 3 21:59:14.110: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.110: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jun 3 21:59:14.110: INFO: Checking APIGroup: apiextensions.k8s.io Jun 3 21:59:14.111: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jun 3 21:59:14.111: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.111: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jun 3 21:59:14.111: INFO: Checking APIGroup: scheduling.k8s.io Jun 3 21:59:14.112: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jun 3 21:59:14.112: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.112: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jun 3 21:59:14.112: INFO: Checking APIGroup: coordination.k8s.io Jun 3 21:59:14.113: INFO: 
PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jun 3 21:59:14.113: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.113: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jun 3 21:59:14.113: INFO: Checking APIGroup: node.k8s.io Jun 3 21:59:14.114: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jun 3 21:59:14.114: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.114: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jun 3 21:59:14.114: INFO: Checking APIGroup: discovery.k8s.io Jun 3 21:59:14.115: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Jun 3 21:59:14.115: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.115: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Jun 3 21:59:14.115: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jun 3 21:59:14.116: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jun 3 21:59:14.116: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.116: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jun 3 21:59:14.116: INFO: Checking APIGroup: intel.com Jun 3 21:59:14.117: INFO: PreferredVersion.GroupVersion: intel.com/v1 Jun 3 21:59:14.117: INFO: Versions found [{intel.com/v1 v1}] Jun 3 21:59:14.117: INFO: intel.com/v1 matches intel.com/v1 Jun 3 21:59:14.117: INFO: Checking APIGroup: k8s.cni.cncf.io Jun 3 21:59:14.117: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Jun 3 21:59:14.117: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Jun 3 21:59:14.117: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Jun 3 21:59:14.117: INFO: Checking APIGroup: monitoring.coreos.com Jun 3 21:59:14.119: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Jun 3 21:59:14.119: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Jun 3 21:59:14.119: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Jun 3 21:59:14.119: INFO: Checking APIGroup: telemetry.intel.com Jun 3 21:59:14.120: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Jun 3 21:59:14.120: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Jun 3 21:59:14.120: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Jun 3 21:59:14.120: INFO: Checking APIGroup: custom.metrics.k8s.io Jun 3 21:59:14.121: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Jun 3 21:59:14.121: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Jun 3 21:59:14.121: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:14.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7287" for this suite. 
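Looking back at the Container Lifecycle Hook test above: pod-with-poststart-http-hook carries a postStart httpGet hook aimed at the pod-handle-http-request handler pod. The hook's shape is roughly the following; the host, path, and port values are assumptions, since the log does not print the hook spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-http-hook-demo   # illustrative name
    spec:
      containers:
      - name: main
        image: busybox                          # illustrative image
        command: ["sleep", "3600"]
        lifecycle:
          postStart:
            httpGet:
              host: 10.244.3.2                  # IP of the handler pod; illustrative
              path: /echo?msg=poststart         # assumed handler endpoint
              port: 8080                        # assumed handler port

If a postStart hook fails, the kubelet kills the container, so the "check poststart hook" step only passes once the handler pod has actually received the request.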
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":7,"skipped":140,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:12.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 3 21:59:12.454: INFO: Waiting up to 5m0s for pod "pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160" in namespace "emptydir-987" to be "Succeeded or Failed" Jun 3 21:59:12.456: INFO: Pod "pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015048ms Jun 3 21:59:14.460: INFO: Pod "pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005554967s Jun 3 21:59:16.463: INFO: Pod "pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008565769s STEP: Saw pod success Jun 3 21:59:16.463: INFO: Pod "pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160" satisfied condition "Succeeded or Failed" Jun 3 21:59:16.466: INFO: Trying to get logs from node node2 pod pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160 container test-container: STEP: delete the pod Jun 3 21:59:16.479: INFO: Waiting for pod pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160 to disappear Jun 3 21:59:16.480: INFO: Pod pod-4615e3e0-41b0-4fec-8e93-8c56c78f3160 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:16.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-987" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":537,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:14.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-071a10c8-c5e9-4d03-a2d0-0f9c2628887b STEP: Creating a pod to test consume secrets Jun 3 21:59:14.196: INFO: Waiting up to 5m0s for pod "pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a" in namespace "secrets-1292" to be "Succeeded or Failed" Jun 3 21:59:14.198: INFO: Pod "pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.37087ms Jun 3 21:59:16.202: INFO: Pod "pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005992505s Jun 3 21:59:18.206: INFO: Pod "pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009994224s STEP: Saw pod success Jun 3 21:59:18.206: INFO: Pod "pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a" satisfied condition "Succeeded or Failed" Jun 3 21:59:18.209: INFO: Trying to get logs from node node2 pod pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a container secret-volume-test: STEP: delete the pod Jun 3 21:59:18.221: INFO: Waiting for pod pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a to disappear Jun 3 21:59:18.223: INFO: Pod pod-secrets-f8385e7c-ffba-43b6-82cf-78b02bc0933a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:18.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1292" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:06.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:59:06.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 21:59:08.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:59:11.530: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:59:06.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 3 21:59:06.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 3 21:59:08.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890346, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 3 21:59:11.530: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 21:59:11.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7895-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:59:19.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4378" for this suite.
STEP: Destroying namespace "webhook-4378-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.459 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
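The step worth noting in the webhook test above is "Patching Custom Resource Definition to set v2 as storage": a multi-version CRD has exactly one version with storage: true, and flipping it changes which version new writes are persisted as while the mutating webhook keeps patching objects. The same flip can be reproduced on any two-version CRD with a JSON patch; the CRD name and version indexes below are illustrative:

    # Hypothetical example: make v2 the storage version of a two-version CRD
    # (exactly one version may carry storage: true at a time)
    kubectl patch crd widgets.example.com --type='json' -p='[
      {"op": "replace", "path": "/spec/versions/0/storage", "value": false},
      {"op": "replace", "path": "/spec/versions/1/storage", "value": true}
    ]'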
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:59:12.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jun 3 21:59:12.832: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 3 21:59:12.845: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 3 21:59:14.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 3 21:59:16.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890352, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 3 21:59:19.866: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a validating webhook configuration
Jun 3 21:59:19.879: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 21:59:20.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5186" for this suite.
STEP: Destroying namespace "webhook-5186-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.710 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":23,"skipped":371,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
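The validating-webhook case above turns on rule updates: the test drops CREATE from the webhook's rules (so a non-compliant ConfigMap slips through), then patches CREATE back in (so the next one is rejected again). The equivalent manual patches, against an illustrative configuration name:

    # Hypothetical example: remove CREATE from the first rule of the first webhook
    kubectl patch validatingwebhookconfiguration demo-webhook-config --type='json' \
      -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["UPDATE"]}]'

    # ...and patch it back so CREATE requests are intercepted again
    kubectl patch validatingwebhookconfiguration demo-webhook-config --type='json' \
      -p='[{"op": "replace", "path": "/webhooks/0/rules/0/operations", "value": ["CREATE", "UPDATE"]}]'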
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 21:56:53.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-7837
STEP: creating service affinity-nodeport in namespace services-7837
STEP: creating replication controller affinity-nodeport in namespace services-7837
I0603 21:56:53.656313 31 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7837, replica count: 3
I0603 21:56:56.707340 31 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0603 21:56:59.708352 31 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 3 21:56:59.717: INFO: Creating new exec pod
Jun 3 21:57:06.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Jun 3 21:57:07.018: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Jun 3 21:57:07.018: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 3 21:57:07.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.26.148 80'
Jun 3 21:57:07.266: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.26.148 80\nConnection to 10.233.26.148 80 port [tcp/http] succeeded!\n"
Jun 3 21:57:07.266: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 3 21:57:07.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368'
Jun 3 21:57:07.515: INFO: rc: 1
Jun 3 21:57:07.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32368
nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[The same probe of 10.10.190.207:32368 was retried roughly once per second from 21:57:08.516 through 21:58:25.773; every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused", and the identical retry records are elided here.]
Jun 3 21:58:26.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:26.762: INFO: rc: 1 Jun 3 21:58:26.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:27.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:27.762: INFO: rc: 1 Jun 3 21:58:27.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:28.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:28.782: INFO: rc: 1 Jun 3 21:58:28.782: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:29.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:29.765: INFO: rc: 1 Jun 3 21:58:29.765: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:30.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:31.520: INFO: rc: 1 Jun 3 21:58:31.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:32.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:32.775: INFO: rc: 1 Jun 3 21:58:32.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:33.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:33.804: INFO: rc: 1 Jun 3 21:58:33.804: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:34.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:34.759: INFO: rc: 1 Jun 3 21:58:34.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:35.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:36.021: INFO: rc: 1 Jun 3 21:58:36.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:36.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:36.796: INFO: rc: 1 Jun 3 21:58:36.796: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:37.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:37.751: INFO: rc: 1 Jun 3 21:58:37.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:38.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:38.754: INFO: rc: 1 Jun 3 21:58:38.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:39.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:39.757: INFO: rc: 1 Jun 3 21:58:39.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:40.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:40.804: INFO: rc: 1 Jun 3 21:58:40.804: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:41.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:41.762: INFO: rc: 1 Jun 3 21:58:41.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:42.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:42.902: INFO: rc: 1 Jun 3 21:58:42.902: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:43.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:43.770: INFO: rc: 1 Jun 3 21:58:43.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:44.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:45.076: INFO: rc: 1 Jun 3 21:58:45.076: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:45.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:46.094: INFO: rc: 1 Jun 3 21:58:46.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:46.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:46.816: INFO: rc: 1 Jun 3 21:58:46.817: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:47.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:47.826: INFO: rc: 1 Jun 3 21:58:47.826: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:48.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:48.787: INFO: rc: 1 Jun 3 21:58:48.787: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:49.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:49.973: INFO: rc: 1 Jun 3 21:58:49.973: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:50.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:50.831: INFO: rc: 1 Jun 3 21:58:50.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:51.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:51.751: INFO: rc: 1 Jun 3 21:58:51.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + + ncecho -v hostName -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:52.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:52.817: INFO: rc: 1 Jun 3 21:58:52.817: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:53.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:54.054: INFO: rc: 1 Jun 3 21:58:54.054: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:54.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:54.949: INFO: rc: 1 Jun 3 21:58:54.949: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32368 + echo hostName nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:55.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:55.867: INFO: rc: 1 Jun 3 21:58:55.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:56.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:56.757: INFO: rc: 1 Jun 3 21:58:56.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:57.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:57.765: INFO: rc: 1 Jun 3 21:58:57.765: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:58:58.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:58.763: INFO: rc: 1 Jun 3 21:58:58.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:58:59.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:58:59.752: INFO: rc: 1 Jun 3 21:58:59.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:00.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:01.815: INFO: rc: 1 Jun 3 21:59:01.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:02.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:02.836: INFO: rc: 1 Jun 3 21:59:02.837: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:03.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:03.743: INFO: rc: 1 Jun 3 21:59:03.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:59:04.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:04.736: INFO: rc: 1 Jun 3 21:59:04.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32368 + echo hostName nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:05.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:06.001: INFO: rc: 1 Jun 3 21:59:06.001: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:06.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:06.757: INFO: rc: 1 Jun 3 21:59:06.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:07.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368' Jun 3 21:59:07.873: INFO: rc: 1 Jun 3 21:59:07.873: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32368 nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
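The probe above is nothing more than a poll-until-deadline loop around kubectl exec. For reproducing it outside the suite, here is a minimal Go sketch of the same pattern; the namespace, pod name, endpoint, kubeconfig path and 2m0s deadline are the values from this log, while the code itself is a reconstruction from the visible output, not the framework's actual implementation (that lives in test/e2e/network/service.go per the stack trace below).

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeNodePort mirrors the check in the log: run `nc` inside the exec pod
    // against the node IP / NodePort roughly once a second until it succeeds
    // or the deadline passes.
    func probeNodePort(ns, pod, nodeIP string, port int, timeout time.Duration) error {
        script := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", nodeIP, port)
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(time.Second) {
            out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
                "--namespace", ns, "exec", pod, "--", "/bin/sh", "-x", "-c", script).CombinedOutput()
            if err == nil {
                fmt.Printf("reachable:\n%s", out)
                return nil
            }
        }
        return fmt.Errorf("service is not reachable within %v timeout on endpoint %s:%d over TCP protocol",
            timeout, nodeIP, port)
    }

    func main() {
        if err := probeNodePort("services-7837", "execpod-affinitybsmqd", "10.10.190.207", 32368, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Against this cluster the loop would have behaved exactly as the log shows: refused connections until the deadline, then the 2m0s error. The final attempt and the resulting timeout failure follow.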
Jun 3 21:59:07.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368'
Jun 3 21:59:08.210: INFO: rc: 1
Jun 3 21:59:08.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7837 exec execpod-affinitybsmqd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32368:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32368
nc: connect to 10.10.190.207 port 32368 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 3 21:59:08.210: FAIL: Unexpected error:
    <*errors.errorString | 0xc000d6ef80>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32368 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32368 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001172160, 0x77b33d8, 0xc000763080, 0xc001440a00, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000464780)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000464780)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000464780, 0x70f99e8)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 3 21:59:08.211: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-7837, will wait for the garbage collector to delete the pods
Jun 3 21:59:08.286: INFO: Deleting ReplicationController affinity-nodeport took: 4.691697ms
Jun 3 21:59:08.387: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.78545ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-7837".
STEP: Found 27 events.
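Before the event dump below, one detail the retry spam makes easy to miss: every attempt failed instantly with Connection refused instead of hanging for nc's two-second timeout. On a NodePort that usually means kube-proxy had no ready endpoints to program for the Service (with no endpoints it installs reject rules, or nothing at all, on the port). The events show the backend pods starting normally, so the natural next check is whether the Service ever listed ready endpoints. A short client-go sketch of that check, assuming the Service shares the affinity-nodeport name with its ReplicationController; the namespace and kubeconfig path are taken from this log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // "affinity-nodeport" is the ReplicationController name from the
        // cleanup step above; the Service sharing that name is an assumption.
        ep, err := cs.CoreV1().Endpoints("services-7837").Get(context.TODO(), "affinity-nodeport", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if len(ep.Subsets) == 0 {
            fmt.Println("no subsets: kube-proxy has nothing to program for the NodePort")
        }
        for _, ss := range ep.Subsets {
            fmt.Printf("ready=%d notReady=%d ports=%v\n", len(ss.Addresses), len(ss.NotReadyAddresses), ss.Ports)
        }
    }

The same question can be answered from a shell with kubectl -n services-7837 get endpoints affinity-nodeport.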
Jun 3 21:59:22.405: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-52qgv: { } Scheduled: Successfully assigned services-7837/affinity-nodeport-52qgv to node1
Jun 3 21:59:22.405: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-lbzh9: { } Scheduled: Successfully assigned services-7837/affinity-nodeport-lbzh9 to node1
Jun 3 21:59:22.405: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-tvqzl: { } Scheduled: Successfully assigned services-7837/affinity-nodeport-tvqzl to node2
Jun 3 21:59:22.405: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitybsmqd: { } Scheduled: Successfully assigned services-7837/execpod-affinitybsmqd to node2
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:53 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-52qgv
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:53 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-tvqzl
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:53 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-lbzh9
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:55 +0000 UTC - event for affinity-nodeport-tvqzl: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 263.088231ms
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:55 +0000 UTC - event for affinity-nodeport-tvqzl: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:56 +0000 UTC - event for affinity-nodeport-52qgv: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:56 +0000 UTC - event for affinity-nodeport-52qgv: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 251.456197ms
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:56 +0000 UTC - event for affinity-nodeport-lbzh9: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 463.928472ms
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:56 +0000 UTC - event for affinity-nodeport-lbzh9: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:56 +0000 UTC - event for affinity-nodeport-tvqzl: {kubelet node2} Created: Created container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:56 +0000 UTC - event for affinity-nodeport-tvqzl: {kubelet node2} Started: Started container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:57 +0000 UTC - event for affinity-nodeport-52qgv: {kubelet node1} Started: Started container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:57 +0000 UTC - event for affinity-nodeport-52qgv: {kubelet node1} Created: Created container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:57 +0000 UTC - event for affinity-nodeport-lbzh9: {kubelet node1} Started: Started container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:56:57 +0000 UTC - event for affinity-nodeport-lbzh9: {kubelet node1} Created: Created container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:57:01 +0000 UTC - event for execpod-affinitybsmqd: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:57:01 +0000 UTC - event for execpod-affinitybsmqd: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 274.38666ms
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:57:02 +0000 UTC - event for execpod-affinitybsmqd: {kubelet node2} Started: Started container agnhost-container
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:57:02 +0000 UTC - event for execpod-affinitybsmqd: {kubelet node2} Created: Created container agnhost-container
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:59:08 +0000 UTC - event for affinity-nodeport-52qgv: {kubelet node1} Killing: Stopping container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:59:08 +0000 UTC - event for affinity-nodeport-lbzh9: {kubelet node1} Killing: Stopping container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:59:08 +0000 UTC - event for affinity-nodeport-tvqzl: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 3 21:59:22.405: INFO: At 2022-06-03 21:59:08 +0000 UTC - event for execpod-affinitybsmqd: {kubelet node2} Killing: Stopping container agnhost-container
Jun 3 21:59:22.407: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 21:59:22.407: INFO: 
Jun 3 21:59:22.412: INFO: Logging node info for node master1
Jun 3 21:59:22.415: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 37353 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 21:59:22.416: INFO: Logging kubelet events for node master1 Jun 3 21:59:22.418: INFO: Logging pods the kubelet 
thinks is on node master1 Jun 3 21:59:22.448: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container kube-scheduler ready: true, restart count 0 Jun 3 21:59:22.448: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 21:59:22.448: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container autoscaler ready: true, restart count 2 Jun 3 21:59:22.448: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container coredns ready: true, restart count 1 Jun 3 21:59:22.448: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:22.448: INFO: Container docker-registry ready: true, restart count 0 Jun 3 21:59:22.448: INFO: Container nginx ready: true, restart count 0 Jun 3 21:59:22.448: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container kube-multus ready: true, restart count 1 Jun 3 21:59:22.448: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:22.448: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:22.448: INFO: Container node-exporter ready: true, restart count 0 Jun 3 21:59:22.448: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 21:59:22.448: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 3 21:59:22.448: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded) Jun 3 21:59:22.448: INFO: Init container install-cni ready: true, restart count 0 Jun 3 21:59:22.448: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 21:59:22.545: INFO: Latency metrics for node master1 Jun 3 21:59:22.545: INFO: Logging node info for node master2 Jun 3 21:59:22.547: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 37335 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 21:59:22.548: INFO: Logging kubelet events for node master2
Jun 3 21:59:22.550: INFO: Logging pods the kubelet thinks are on node master2
Jun 3 21:59:22.565: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 21:59:22.565: INFO: Container prometheus-operator ready: true, restart count 0
Jun 3 21:59:22.565: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 21:59:22.565: INFO: Container node-exporter ready: true, restart count 0
Jun 3 21:59:22.565: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-apiserver ready: true, restart count 0
Jun 3 21:59:22.565: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 3 21:59:22.565: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 21:59:22.565: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-scheduler ready: true, restart count 3
Jun 3 21:59:22.565: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 21:59:22.565: INFO: Init container install-cni ready: true, restart count 2
Jun 3 21:59:22.565: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 21:59:22.565: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.565: INFO: Container kube-multus ready: true, restart count 1
Jun 3 21:59:22.650: INFO: Latency metrics for node master2
Jun 3 21:59:22.650: INFO: Logging node info for node master3
Jun 3 21:59:22.653: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 37348 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:21 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 21:59:22.653: INFO: Logging kubelet events for node master3
Jun 3 21:59:22.655: INFO: Logging pods the kubelet thinks are on node master3
Jun 3 21:59:22.669: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container kube-apiserver ready: true, restart count 0
Jun 3 21:59:22.669: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Init container install-cni ready: true, restart count 2
Jun 3 21:59:22.669: INFO: Container kube-flannel ready: true, restart count 2
Jun 3 21:59:22.669: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container kube-multus ready: true, restart count 1
Jun 3 21:59:22.669: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container nfd-controller ready: true, restart count 0
Jun 3 21:59:22.669: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 3 21:59:22.669: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container kube-scheduler ready: true, restart count 3
Jun 3 21:59:22.669: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 21:59:22.669: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container coredns ready: true, restart count 1
Jun 3 21:59:22.669: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:22.669: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 21:59:22.669: INFO: Container node-exporter ready: true, restart count 0
Jun 3 21:59:22.764: INFO: Latency metrics for node master3
Jun 3 21:59:22.764: INFO: Logging node info for node node1
Jun 3 21:59:22.767: INFO: Node Info: &Node{ObjectMeta:{node1 482ecf0f-7f88-436d-a313-227096fe8b8d 37078 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:11:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 21:59:22.768: INFO: Logging kubelet events for node node1
Jun 3 21:59:22.770: INFO: Logging pods the kubelet thinks are on node node1
Jun 3 21:59:22.787: INFO: frontend-685fc574d5-rvm4j started at 2022-06-03 21:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container guestbook-frontend ready: false, restart count 0
Jun 3 21:59:22.787: INFO: execpodc96mn started at 2022-06-03 21:57:32 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container agnhost-container ready: true, restart count 0
Jun 3 21:59:22.787: INFO: ss2-2 started at (0+0 container statuses recorded)
Jun 3 21:59:22.787: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 21:59:22.787: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Init container install-cni ready: true, restart count 2
Jun 3 21:59:22.787: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 21:59:22.787: INFO: nodeport-test-vd8hl started at 2022-06-03 21:57:26 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container nodeport-test ready: true, restart count 0
Jun 3 21:59:22.787: INFO: dns-test-70187e36-401d-4376-9f18-8b262879825c started at 2022-06-03 21:59:16 +0000 UTC (0+3 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container jessie-querier ready: false, restart count 0
Jun 3 21:59:22.787: INFO: Container querier ready: false, restart count 0
Jun 3 21:59:22.787: INFO: Container webserver ready: false, restart count 0
Jun 3 21:59:22.787: INFO: liveness-c3c83afb-b3bf-4ac7-849b-27de5e84cc08 started at 2022-06-03 21:57:17 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container agnhost-container ready: true, restart count 0
Jun 3 21:59:22.787: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 21:59:22.787: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container discover ready: false, restart count 0
Jun 3 21:59:22.787: INFO: Container init ready: false, restart count 0
Jun 3 21:59:22.787: INFO: Container install ready: false, restart count 0
Jun 3 21:59:22.787: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container config-reloader ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container grafana ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container prometheus ready: true, restart count 1
Jun 3 21:59:22.787: INFO: frontend-685fc574d5-qtg9r started at 2022-06-03 21:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container guestbook-frontend ready: false, restart count 0
Jun 3 21:59:22.787: INFO: agnhost-replica-6bcf79b489-clcxg started at 2022-06-03 21:59:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container replica ready: false, restart count 0
Jun 3 21:59:22.787: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 21:59:22.787: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container nodereport ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container reconcile ready: true, restart count 0
Jun 3 21:59:22.787: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container node-exporter ready: true, restart count 0
Jun 3 21:59:22.787: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container collectd ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 21:59:22.787: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 21:59:22.787: INFO: test-pod started at 2022-06-03 21:57:06 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container webserver ready: true, restart count 0
Jun 3 21:59:22.787: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container kube-multus ready: true, restart count 1
Jun 3 21:59:22.787: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 21:59:22.787: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:22.787: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 21:59:24.478: INFO: Latency metrics for node node1
Jun 3 21:59:24.478: INFO: Logging node info for node node2
Jun 3 21:59:24.481: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 37083 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:12:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:16 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 21:59:24.481: INFO: Logging kubelet events for node node2
Jun 3 21:59:24.484: INFO: Logging pods the kubelet thinks are on node node2
Jun 3 21:59:24.499: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 21:59:24.499: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container discover ready: false, restart count 0
Jun 3 21:59:24.499: INFO: Container init ready: false, restart count 0
Jun 3 21:59:24.499: INFO: Container install ready: false, restart count 0
Jun 3 21:59:24.499: INFO: nodeport-test-5k588 started at 2022-06-03 21:57:25 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container nodeport-test ready: true, restart count 0
Jun 3 21:59:24.499: INFO: agnhost-replica-6bcf79b489-r84gx started at 2022-06-03 21:59:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container replica ready: false, restart count 0
Jun 3 21:59:24.499: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container tas-extender ready: true, restart count 0
Jun 3 21:59:24.499: INFO: ss2-1 started at 2022-06-03 21:58:46 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container webserver ready: true, restart count 0
Jun 3 21:59:24.499: INFO: pod-service-account-defaultsa-nomountspec started at 2022-06-03 21:58:52 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container token-test ready: true, restart count 0
Jun 3 21:59:24.499: INFO: pod-secrets-053a3982-05a0-4655-8ec1-647ade6817ff started at 2022-06-03 21:59:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container secret-volume-test ready: false, restart count 0
Jun 3 21:59:24.499: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Init container install-cni ready: true, restart count 0
Jun 3 21:59:24.499: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 21:59:24.499: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container kube-multus ready: true, restart count 1
Jun 3 21:59:24.499: INFO: frontend-685fc574d5-n4cdj started at 2022-06-03 21:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container guestbook-frontend ready: false, restart count 0
Jun 3 21:59:24.499: INFO: ss2-0 started at (0+0 container statuses recorded)
Jun 3 21:59:24.499: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 21:59:24.499: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 21:59:24.499: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container collectd ready: true, restart count 0
Jun 3 21:59:24.499: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 21:59:24.499: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 21:59:24.499: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 3 21:59:24.499: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container nodereport ready: true, restart count 0
Jun 3 21:59:24.499: INFO: Container reconcile ready: true, restart count 0
Jun 3 21:59:24.499: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 21:59:24.499: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 21:59:24.499: INFO: Container node-exporter ready: true, restart count 0
Jun 3 21:59:24.500: INFO: test-webserver-f9e25df1-fdee-4024-abe0-cd0a6e13c7ec started at 2022-06-03 21:56:50 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.500: INFO: Container test-webserver ready: true, restart count 0
Jun 3 21:59:24.500: INFO: logs-generator started at 2022-06-03 21:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.500: INFO: Container logs-generator ready: false, restart count 0
Jun 3 21:59:24.500: INFO: agnhost-primary-5db8ddd565-vgmbn started at 2022-06-03 21:59:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.500: INFO: Container primary ready: false, restart count 0
Jun 3 21:59:24.500: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.500: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 3 21:59:24.500: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 21:59:24.500: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 21:59:25.573: INFO: Latency metrics for node node2
Jun 3 21:59:25.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7837" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [151.960 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jun 3 21:59:08.210: Unexpected error:
      <*errors.errorString | 0xc000d6ef80>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32368 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32368 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":73,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
Elapsed: 4.011701473s Jun 3 21:59:26.151: INFO: Pod "pod-secrets-053a3982-05a0-4655-8ec1-647ade6817ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014792207s STEP: Saw pod success Jun 3 21:59:26.151: INFO: Pod "pod-secrets-053a3982-05a0-4655-8ec1-647ade6817ff" satisfied condition "Succeeded or Failed" Jun 3 21:59:26.153: INFO: Trying to get logs from node node2 pod pod-secrets-053a3982-05a0-4655-8ec1-647ade6817ff container secret-volume-test: STEP: delete the pod Jun 3 21:59:26.168: INFO: Waiting for pod pod-secrets-053a3982-05a0-4655-8ec1-647ade6817ff to disappear Jun 3 21:59:26.170: INFO: Pod pod-secrets-053a3982-05a0-4655-8ec1-647ade6817ff no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:26.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1433" for this suite. STEP: Destroying namespace "secret-namespace-8528" for this suite. • [SLOW TEST:6.102 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":386,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:16.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.test-service-2.dns-400.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-400.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 157.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.157_udp@PTR;check="$$(dig +tcp +noall +answer +search 157.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.157_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-400.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-400.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-400.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-400.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-400.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-400.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 157.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.157_udp@PTR;check="$$(dig +tcp +noall +answer +search 157.46.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.46.157_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 21:59:24.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-400.svc.cluster.local from pod dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c: the server could not find the requested resource (get pods dns-test-70187e36-401d-4376-9f18-8b262879825c) Jun 3 21:59:24.557: INFO: Unable to read wheezy_tcp@dns-test-service.dns-400.svc.cluster.local from pod dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c: the server could not find the requested resource (get pods dns-test-70187e36-401d-4376-9f18-8b262879825c) Jun 3 21:59:24.560: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-400.svc.cluster.local from pod dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c: the server could not find the requested resource (get pods dns-test-70187e36-401d-4376-9f18-8b262879825c) Jun 3 21:59:24.563: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-400.svc.cluster.local from pod dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c: the server could not find the requested resource (get pods dns-test-70187e36-401d-4376-9f18-8b262879825c) Jun 3 21:59:24.583: INFO: Unable to read jessie_udp@dns-test-service.dns-400.svc.cluster.local from pod dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c: the server could not find the requested resource (get pods dns-test-70187e36-401d-4376-9f18-8b262879825c) Jun 3 21:59:24.586: INFO: Unable to read jessie_tcp@dns-test-service.dns-400.svc.cluster.local from pod dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c: the server could not find the requested resource (get pods dns-test-70187e36-401d-4376-9f18-8b262879825c) Jun 3 21:59:24.611: INFO: Lookups using dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c failed for: [wheezy_udp@dns-test-service.dns-400.svc.cluster.local wheezy_tcp@dns-test-service.dns-400.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-400.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-400.svc.cluster.local jessie_udp@dns-test-service.dns-400.svc.cluster.local jessie_tcp@dns-test-service.dns-400.svc.cluster.local] Jun 3 21:59:29.666: INFO: DNS probes using dns-400/dns-test-70187e36-401d-4376-9f18-8b262879825c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:29.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-400" for this suite. 
• [SLOW TEST:13.204 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":30,"skipped":538,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:25.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-81638e22-89be-40d0-8561-c4711e487f1f STEP: Creating a pod to test consume secrets Jun 3 21:59:25.691: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351" in namespace "projected-4440" to be "Succeeded or Failed" Jun 3 21:59:25.694: INFO: Pod "pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.996986ms Jun 3 21:59:27.699: INFO: Pod "pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007715661s Jun 3 21:59:29.701: INFO: Pod "pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010528347s STEP: Saw pod success Jun 3 21:59:29.702: INFO: Pod "pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351" satisfied condition "Succeeded or Failed" Jun 3 21:59:29.704: INFO: Trying to get logs from node node1 pod pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351 container projected-secret-volume-test: STEP: delete the pod Jun 3 21:59:29.717: INFO: Waiting for pod pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351 to disappear Jun 3 21:59:29.719: INFO: Pod pod-projected-secrets-dfe15a2a-6c3d-4cbd-8b74-37a7a34ab351 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:29.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4440" for this suite. 
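The pod this spec creates consumes the secret through a projected volume with an explicit defaultMode while running as a non-root UID with an fsGroup applied. A minimal sketch of an equivalent pod follows; the names, the busybox image, and the 1000/2000/0440 values are illustrative assumptions, not the generated values from this run:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      securityContext:
        runAsUser: 1000        # the "non-root" part of the spec name
        fsGroup: 2000          # group ownership applied to the volume files
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox:1.28
        command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/projected
      volumes:
      - name: secret-vol
        projected:
          defaultMode: 0440    # the file mode this kind of test asserts on
          sources:
          - secret:
              name: demo-secret
    EOF

The pod runs to completion, and kubectl logs projected-secret-demo shows the mode and ownership the kubelet applied to the mounted files.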
•S ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":95,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:18.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Jun 3 21:59:18.343: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jun 3 21:59:18.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 create -f -' Jun 3 21:59:18.747: INFO: stderr: "" Jun 3 21:59:18.747: INFO: stdout: "service/agnhost-replica created\n" Jun 3 21:59:18.747: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jun 3 21:59:18.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 create -f -' Jun 3 21:59:19.106: INFO: stderr: "" Jun 3 21:59:19.106: INFO: stdout: "service/agnhost-primary created\n" Jun 3 21:59:19.106: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 3 21:59:19.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 create -f -' Jun 3 21:59:19.476: INFO: stderr: "" Jun 3 21:59:19.476: INFO: stdout: "service/frontend created\n" Jun 3 21:59:19.476: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jun 3 21:59:19.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 create -f -' Jun 3 21:59:19.809: INFO: stderr: "" Jun 3 21:59:19.809: INFO: stdout: "deployment.apps/frontend created\n" Jun 3 21:59:19.809: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 3 21:59:19.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 create -f -' Jun 3 21:59:20.156: INFO: stderr: "" Jun 3 21:59:20.156: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jun 3 21:59:20.156: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 3 21:59:20.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 create -f -' Jun 3 21:59:20.499: INFO: stderr: "" Jun 3 21:59:20.499: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jun 3 21:59:20.499: INFO: Waiting for all frontend pods to be Running. Jun 3 21:59:30.553: INFO: Waiting for frontend to serve content. Jun 3 21:59:30.560: INFO: Trying to add a new entry to the guestbook. Jun 3 21:59:30.569: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 3 21:59:30.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 delete --grace-period=0 --force -f -' Jun 3 21:59:30.723: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:30.723: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jun 3 21:59:30.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 delete --grace-period=0 --force -f -' Jun 3 21:59:30.844: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:30.844: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jun 3 21:59:30.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 delete --grace-period=0 --force -f -' Jun 3 21:59:30.974: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:30.974: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 3 21:59:30.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 delete --grace-period=0 --force -f -' Jun 3 21:59:31.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:31.118: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 3 21:59:31.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 delete --grace-period=0 --force -f -' Jun 3 21:59:31.240: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:31.240: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jun 3 21:59:31.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-615 delete --grace-period=0 --force -f -' Jun 3 21:59:31.378: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:31.378: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:31.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-615" for this suite. 
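Every cleanup step above uses the same force-deletion form, which is what produces the repeated "Immediate deletion does not wait..." warning: --grace-period=0 together with --force removes the API object immediately instead of waiting for the containers to terminate. Reproduced here against an illustrative namespace:

    # the object is gone from the API at once; its containers may outlive it
    # briefly, which is exactly what the kubectl warning is about
    kubectl --namespace=demo-ns delete deployment frontend --grace-period=0 --force
    # the test feeds the original manifests back on stdin instead of naming them:
    #   kubectl --namespace=demo-ns delete --grace-period=0 --force -f -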
• [SLOW TEST:13.066 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":9,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:26.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Jun 3 21:59:26.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 create -f -' Jun 3 21:59:26.603: INFO: stderr: "" Jun 3 21:59:26.603: INFO: stdout: "pod/pause created\n" Jun 3 21:59:26.603: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 3 21:59:26.604: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8071" to be "running and ready" Jun 3 21:59:26.607: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.989177ms Jun 3 21:59:28.611: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007116817s Jun 3 21:59:30.620: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.016367984s Jun 3 21:59:30.620: INFO: Pod "pause" satisfied condition "running and ready" Jun 3 21:59:30.620: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Jun 3 21:59:30.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 label pods pause testing-label=testing-label-value' Jun 3 21:59:30.803: INFO: stderr: "" Jun 3 21:59:30.803: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 3 21:59:30.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 get pod pause -L testing-label' Jun 3 21:59:30.988: INFO: stderr: "" Jun 3 21:59:30.988: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 3 21:59:30.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 label pods pause testing-label-' Jun 3 21:59:31.166: INFO: stderr: "" Jun 3 21:59:31.166: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 3 21:59:31.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 get pod pause -L testing-label' Jun 3 21:59:31.322: INFO: stderr: "" Jun 3 21:59:31.322: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Jun 3 21:59:31.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 delete --grace-period=0 --force -f -' Jun 3 21:59:31.457: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:31.457: INFO: stdout: "pod \"pause\" force deleted\n" Jun 3 21:59:31.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 get rc,svc -l name=pause --no-headers' Jun 3 21:59:31.668: INFO: stderr: "No resources found in kubectl-8071 namespace.\n" Jun 3 21:59:31.668: INFO: stdout: "" Jun 3 21:59:31.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8071 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 21:59:31.838: INFO: stderr: "" Jun 3 21:59:31.838: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:31.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8071" for this suite. 
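The label round trip in this spec maps onto three kubectl invocations: add with key=value, inspect with -L, and remove with a trailing dash. The same sequence can be run by hand against the pod (the --kubeconfig and --namespace flags from the log are omitted for brevity):

    kubectl label pods pause testing-label=testing-label-value   # add the label
    kubectl get pod pause -L testing-label    # -L prints a TESTING-LABEL column
    kubectl label pods pause testing-label-   # trailing '-' removes the label
    kubectl get pod pause -L testing-label    # column header remains, value is empty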
• [SLOW TEST:5.657 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":25,"skipped":387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:29.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-13a3a8f3-6ea0-47f6-9082-43fa27820f9e STEP: Creating a pod to test consume configMaps Jun 3 21:59:29.771: INFO: Waiting up to 5m0s for pod "pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039" in namespace "configmap-2300" to be "Succeeded or Failed" Jun 3 21:59:29.776: INFO: Pod "pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.918862ms Jun 3 21:59:31.781: INFO: Pod "pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009431518s Jun 3 21:59:33.785: INFO: Pod "pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01382334s Jun 3 21:59:35.789: INFO: Pod "pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017327841s STEP: Saw pod success Jun 3 21:59:35.789: INFO: Pod "pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039" satisfied condition "Succeeded or Failed" Jun 3 21:59:35.791: INFO: Trying to get logs from node node2 pod pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039 container agnhost-container: STEP: delete the pod Jun 3 21:59:35.806: INFO: Waiting for pod pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039 to disappear Jun 3 21:59:35.808: INFO: Pod pod-configmaps-661d5cf4-2733-4c23-bdac-ce3cc719a039 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:35.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2300" for this suite. 
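The "with mappings" part of this spec name refers to the items list of a configMap volume, which renames a key on disk instead of using the key itself as the filename; "as non-root" is a pod-level runAsUser. A minimal sketch with illustrative names and values (demo-cm, the mapped path, UID 1000, and the busybox image are assumptions, not the generated values from this run):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mapping-demo
    spec:
      securityContext:
        runAsUser: 1000          # non-root
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox:1.28
        command: ["sh", "-c", "cat /etc/cm/path/to/data-2"]   # reads the mapped path
        volumeMounts:
        - name: cm-vol
          mountPath: /etc/cm
      volumes:
      - name: cm-vol
        configMap:
          name: demo-cm
          items:                 # map key data-1 to a different file path on disk
          - key: data-1
            path: path/to/data-2
    EOF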
• [SLOW TEST:6.081 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":551,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:31.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 3 21:59:31.469: INFO: Waiting up to 5m0s for pod "pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f" in namespace "emptydir-3831" to be "Succeeded or Failed" Jun 3 21:59:31.471: INFO: Pod "pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336503ms Jun 3 21:59:33.475: INFO: Pod "pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006287506s Jun 3 21:59:35.482: INFO: Pod "pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012572131s Jun 3 21:59:37.486: INFO: Pod "pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016833875s STEP: Saw pod success Jun 3 21:59:37.486: INFO: Pod "pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f" satisfied condition "Succeeded or Failed" Jun 3 21:59:37.489: INFO: Trying to get logs from node node1 pod pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f container test-container: STEP: delete the pod Jun 3 21:59:37.502: INFO: Waiting for pod pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f to disappear Jun 3 21:59:37.504: INFO: Pod pod-7f8e0bb8-714a-48ee-bdb4-b09882fd6a1f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:37.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3831" for this suite. 
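The (non-root,0666,default) triple in the spec name encodes the three dimensions these EmptyDir conformance cases sweep: the UID the container runs as, the file mode it creates, and the volume medium (default node disk rather than Memory/tmpfs). A minimal sketch, with the pod name and busybox image as illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-demo
    spec:
      securityContext:
        runAsUser: 1000          # non-root
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.28
        # create a file with mode 0666 on the emptyDir and print its permissions
        command: ["sh", "-c", "umask 0; echo hi > /ephemeral/f && chmod 0666 /ephemeral/f && ls -l /ephemeral/f"]
        volumeMounts:
        - name: scratch
          mountPath: /ephemeral
      volumes:
      - name: scratch
        emptyDir: {}             # default medium; {medium: Memory} would be tmpfs
    EOF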
• [SLOW TEST:6.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":214,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:29.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4059 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4059;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4059 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4059;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4059.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4059.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4059.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4059.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4059.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4059.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4059.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 200.5.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.5.200_udp@PTR;check="$$(dig +tcp +noall +answer +search 200.5.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.5.200_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4059 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4059;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4059 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4059;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4059.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4059.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4059.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4059.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4059.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4059.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4059.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4059.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 200.5.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.5.200_udp@PTR;check="$$(dig +tcp +noall +answer +search 200.5.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.5.200_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 21:59:35.959: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.962: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059 from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059 from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.974: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.977: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:35.980: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.001: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.004: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.006: INFO: Unable to read jessie_udp@dns-test-service.dns-4059 from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.010: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059 from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.013: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.015: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.017: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.019: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc from pod dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909: the server could not find the requested resource (get pods dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909) Jun 3 21:59:36.034: INFO: Lookups using dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4059 wheezy_tcp@dns-test-service.dns-4059 wheezy_udp@dns-test-service.dns-4059.svc wheezy_tcp@dns-test-service.dns-4059.svc wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4059 jessie_tcp@dns-test-service.dns-4059 jessie_udp@dns-test-service.dns-4059.svc jessie_tcp@dns-test-service.dns-4059.svc jessie_udp@_http._tcp.dns-test-service.dns-4059.svc jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc] Jun 3 21:59:41.104: INFO: DNS probes using dns-4059/dns-test-e1b06db2-5e0d-4267-8f56-77c1e29e9909 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:41.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4059" for this suite. 
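Unlike the fully qualified lookups in the earlier DNS spec, this one exercises partially qualified names: dns-test-service, dns-test-service.dns-4059, and dns-test-service.dns-4059.svc must all resolve. That works because the kubelet puts <namespace>.svc.<zone>, svc.<zone>, and <zone> on the search line of the pod's /etc/resolv.conf, and dig is invoked with +search. A hand-run equivalent with illustrative pod and namespace names:

    kubectl exec dns-probe -n dns-demo -- sh -c '
      grep ^search /etc/resolv.conf
      # e.g.: search dns-demo.svc.cluster.local svc.cluster.local cluster.local
      dig +search +noall +answer dns-test-service A               # bare service name
      dig +search +noall +answer dns-test-service.dns-demo A      # namespace-qualified
      dig +search +noall +answer dns-test-service.dns-demo.svc A  # partially qualified
    '

All three lookups should return the same A record that the fully qualified name resolves to.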
• [SLOW TEST:11.247 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":176,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":14,"skipped":211,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:19.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Jun 3 21:59:19.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 3 21:59:19.889: INFO: stderr: "" Jun 3 21:59:19.889: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Jun 3 21:59:19.889: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 3 21:59:19.890: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1822" to be "running and ready, or succeeded" Jun 3 21:59:19.894: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.970548ms Jun 3 21:59:21.897: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007579041s Jun 3 21:59:23.901: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011841581s Jun 3 21:59:25.906: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016291265s Jun 3 21:59:27.911: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.021673154s Jun 3 21:59:27.911: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 3 21:59:27.911: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for a matching strings Jun 3 21:59:27.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 logs logs-generator logs-generator' Jun 3 21:59:28.067: INFO: stderr: "" Jun 3 21:59:28.067: INFO: stdout: "I0603 21:59:25.308711 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/7xhv 517\nI0603 21:59:25.508814 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/v7m 281\nI0603 21:59:25.709700 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/vzm 354\nI0603 21:59:25.908790 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vlm 212\nI0603 21:59:26.109130 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/b67 258\nI0603 21:59:26.309355 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/qr4 274\nI0603 21:59:26.509668 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/xxv 539\nI0603 21:59:26.708881 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/vll 210\nI0603 21:59:26.909410 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/dqr 218\nI0603 21:59:27.109757 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/nv4 241\nI0603 21:59:27.309138 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/4x9 474\nI0603 21:59:27.509601 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lf6 294\nI0603 21:59:27.708860 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/nvl 461\nI0603 21:59:27.909148 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/bvr 421\n" STEP: limiting log lines Jun 3 21:59:28.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 logs logs-generator logs-generator --tail=1' Jun 3 21:59:28.220: INFO: stderr: "" Jun 3 21:59:28.220: INFO: stdout: "I0603 21:59:28.109528 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/hmt 499\n" Jun 3 21:59:28.220: INFO: got output "I0603 21:59:28.109528 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/hmt 499\n" STEP: limiting log bytes Jun 3 21:59:28.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 logs logs-generator logs-generator --limit-bytes=1' Jun 3 21:59:28.398: INFO: stderr: "" Jun 3 21:59:28.399: INFO: stdout: "I" Jun 3 21:59:28.399: INFO: got output "I" STEP: exposing timestamps Jun 3 21:59:28.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 logs logs-generator logs-generator --tail=1 --timestamps' Jun 3 21:59:28.596: INFO: stderr: "" Jun 3 21:59:28.596: INFO: stdout: "2022-06-03T21:59:28.509316482Z I0603 21:59:28.509158 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/kmr 228\n" Jun 3 21:59:28.596: INFO: got output "2022-06-03T21:59:28.509316482Z I0603 21:59:28.509158 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/kmr 228\n" STEP: restricting to a time range Jun 3 21:59:31.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 logs logs-generator logs-generator --since=1s' Jun 3 21:59:31.351: INFO: stderr: "" Jun 3 21:59:31.351: INFO: stdout: "I0603 21:59:30.509100 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/cxw 590\nI0603 21:59:30.709437 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/c69 518\nI0603 21:59:30.909149 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/2wf 294\nI0603 21:59:31.109647 1 logs_generator.go:76] 29 PUT 
/api/v1/namespaces/default/pods/z5q 211\nI0603 21:59:31.308944 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/5gk 372\n" Jun 3 21:59:31.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 logs logs-generator logs-generator --since=24h' Jun 3 21:59:31.518: INFO: stderr: "" Jun 3 21:59:31.518: INFO: stdout: "I0603 21:59:25.308711 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/7xhv 517\nI0603 21:59:25.508814 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/v7m 281\nI0603 21:59:25.709700 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/vzm 354\nI0603 21:59:25.908790 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vlm 212\nI0603 21:59:26.109130 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/b67 258\nI0603 21:59:26.309355 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/qr4 274\nI0603 21:59:26.509668 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/xxv 539\nI0603 21:59:26.708881 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/vll 210\nI0603 21:59:26.909410 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/dqr 218\nI0603 21:59:27.109757 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/nv4 241\nI0603 21:59:27.309138 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/4x9 474\nI0603 21:59:27.509601 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lf6 294\nI0603 21:59:27.708860 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/nvl 461\nI0603 21:59:27.909148 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/bvr 421\nI0603 21:59:28.109528 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/hmt 499\nI0603 21:59:28.308876 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/tzj 399\nI0603 21:59:28.509158 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/kmr 228\nI0603 21:59:28.709624 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/w8s6 382\nI0603 21:59:28.909010 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/9xfv 227\nI0603 21:59:29.109473 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/x22 488\nI0603 21:59:29.308857 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/zq2 371\nI0603 21:59:29.509342 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/xplh 325\nI0603 21:59:29.709644 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/wl4 436\nI0603 21:59:29.908921 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/bqtx 384\nI0603 21:59:30.109252 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/tmvl 328\nI0603 21:59:30.308721 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/vv76 355\nI0603 21:59:30.509100 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/cxw 590\nI0603 21:59:30.709437 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/c69 518\nI0603 21:59:30.909149 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/2wf 294\nI0603 21:59:31.109647 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/z5q 211\nI0603 21:59:31.308944 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/5gk 372\nI0603 21:59:31.509216 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/jk8 594\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Jun 3 21:59:31.519: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 delete pod logs-generator' Jun 3 21:59:41.552: INFO: stderr: "" Jun 3 21:59:41.552: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:41.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1822" for this suite. • [SLOW TEST:21.860 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:25.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-7321 STEP: creating replication controller nodeport-test in namespace services-7321 I0603 21:57:25.847044 40 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7321, replica count: 2 I0603 21:57:28.898088 40 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 21:57:31.898660 40 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 21:57:31.898: INFO: Creating new exec pod Jun 3 21:57:38.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jun 3 21:57:39.182: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jun 3 21:57:39.182: INFO: stdout: "nodeport-test-vd8hl" Jun 3 21:57:39.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.10.95 80' Jun 3 21:57:39.439: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.10.95 80\nConnection to 10.233.10.95 80 port [tcp/http] succeeded!\n" Jun 3 21:57:39.439: INFO: stdout: "nodeport-test-vd8hl" Jun 3 21:57:39.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:57:39.670: INFO: rc: 1 Jun 3 21:57:39.670: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl 
Jun 3 21:57:39.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697'
Jun 3 21:57:39.670: INFO: rc: 1
Jun 3 21:57:39.670: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32697
nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
[The same probe of 10.10.190.207:32697 is retried at roughly one-second intervals from 21:57:40.671 through 21:59:12.671. All of these roughly ninety attempts fail identically: rc 1 and "nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused". The repeated attempts are elided here; they differ only in timestamps and in occasional interleaving of the two shell trace lines.]
Jun 3 21:59:13.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697'
Jun 3 21:59:13.935: INFO: rc: 1
Jun 3 21:59:13.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32697
nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
Jun 3 21:59:14.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:14.942: INFO: rc: 1 Jun 3 21:59:14.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:15.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:15.918: INFO: rc: 1 Jun 3 21:59:15.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:16.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:16.974: INFO: rc: 1 Jun 3 21:59:16.974: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:17.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:18.045: INFO: rc: 1 Jun 3 21:59:18.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:18.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:18.950: INFO: rc: 1 Jun 3 21:59:18.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:59:19.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:19.960: INFO: rc: 1 Jun 3 21:59:19.960: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:20.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:20.995: INFO: rc: 1 Jun 3 21:59:20.995: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:21.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:21.961: INFO: rc: 1 Jun 3 21:59:21.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:22.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:23.009: INFO: rc: 1 Jun 3 21:59:23.009: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:23.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:23.948: INFO: rc: 1 Jun 3 21:59:23.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:59:24.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:24.926: INFO: rc: 1 Jun 3 21:59:24.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:25.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:25.922: INFO: rc: 1 Jun 3 21:59:25.922: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:26.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:26.922: INFO: rc: 1 Jun 3 21:59:26.922: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:27.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:27.938: INFO: rc: 1 Jun 3 21:59:27.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:28.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:29.698: INFO: rc: 1 Jun 3 21:59:29.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:59:30.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:30.916: INFO: rc: 1 Jun 3 21:59:30.916: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:31.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:31.997: INFO: rc: 1 Jun 3 21:59:31.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:32.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:33.343: INFO: rc: 1 Jun 3 21:59:33.343: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:33.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:34.058: INFO: rc: 1 Jun 3 21:59:34.058: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:34.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:35.736: INFO: rc: 1 Jun 3 21:59:35.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 21:59:36.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:37.038: INFO: rc: 1 Jun 3 21:59:37.039: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:37.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:37.918: INFO: rc: 1 Jun 3 21:59:37.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:38.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:38.930: INFO: rc: 1 Jun 3 21:59:38.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:39.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:40.589: INFO: rc: 1 Jun 3 21:59:40.589: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 21:59:40.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697' Jun 3 21:59:41.095: INFO: rc: 1 Jun 3 21:59:41.095: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 exec execpodc96mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32697: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32697 nc: connect to 10.10.190.207 port 32697 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
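The probe being retried above can be reproduced by hand while the test namespace still exists. A minimal bash sketch, using the exec pod name, node IP, and NodePort recorded in this run (all run-specific values), with the pipeline quoted so that it executes inside the pod rather than in the local shell (the framework's argv logging above prints it unquoted):

  # Single probe, equivalent to the command the framework runs.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 \
    exec execpodc96mn -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 32697'

  # Rough equivalent of the framework's retry behaviour: one attempt per second
  # until success or the 2m0s deadline reported in the failure below.
  deadline=$((SECONDS + 120))
  until kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 \
        exec execpodc96mn -- /bin/sh -c 'echo hostName | nc -w 2 10.10.190.207 32697'; do
    if [ "$SECONDS" -ge "$deadline" ]; then echo 'not reachable within 2m0s'; break; fi
    sleep 1
  done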
Jun 3 21:59:41.096: FAIL: Unexpected error:
    <*errors.errorString | 0xc004990560>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32697 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32697 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703200)
        _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703200)
        _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000703200, 0x70f99e8)
        /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:1238 +0x2b3
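Every attempt above failed fast with "Connection refused" rather than timing out, which points at nothing answering on the node port (an empty endpoints list, a port mismatch, or missing kube-proxy rules) rather than a dropped network path. A short triage sketch, assuming the Service carries the same nodeport-test name as its pods; the Service name itself is not printed in this log:

  # Empty ENDPOINTS would mean the Service selector did not match the Running pods listed below.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 get endpoints nodeport-test -o wide

  # Confirm the Service advertises the probed port (32697 in this run).
  kubectl --kubeconfig=/root/.kube/config --namespace=services-7321 \
    get service nodeport-test -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'

  # On the probed node (10.10.190.207): did kube-proxy program the port? In iptables
  # mode the port may not appear as LISTEN, so check the NAT rules as well.
  ss -tlnp | grep -w 32697
  iptables-save | grep -w 32697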
image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 3 21:59:41.100: INFO: At 2022-06-03 21:57:34 +0000 UTC - event for execpodc96mn: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 250.859558ms Jun 3 21:59:41.100: INFO: At 2022-06-03 21:57:35 +0000 UTC - event for execpodc96mn: {kubelet node1} Started: Started container agnhost-container Jun 3 21:59:41.103: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 21:59:41.103: INFO: execpodc96mn node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:31 +0000 UTC }] Jun 3 21:59:41.103: INFO: nodeport-test-5k588 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:25 +0000 UTC }] Jun 3 21:59:41.103: INFO: nodeport-test-vd8hl node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:25 +0000 UTC }] Jun 3 21:59:41.103: INFO: Jun 3 21:59:41.107: INFO: Logging node info for node master1 Jun 3 21:59:41.109: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 37734 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 21:59:41.110: INFO: Logging kubelet events for node master1 Jun 3 21:59:41.114: INFO: Logging pods the kubelet 
thinks is on node master1 Jun 3 21:59:41.147: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container coredns ready: true, restart count 1 Jun 3 21:59:41.147: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.147: INFO: Container docker-registry ready: true, restart count 0 Jun 3 21:59:41.147: INFO: Container nginx ready: true, restart count 0 Jun 3 21:59:41.147: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container kube-scheduler ready: true, restart count 0 Jun 3 21:59:41.147: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 21:59:41.147: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container autoscaler ready: true, restart count 2 Jun 3 21:59:41.147: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Init container install-cni ready: true, restart count 0 Jun 3 21:59:41.147: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 21:59:41.147: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container kube-multus ready: true, restart count 1 Jun 3 21:59:41.147: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.147: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.147: INFO: Container node-exporter ready: true, restart count 0 Jun 3 21:59:41.147: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 21:59:41.147: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.147: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 3 21:59:41.238: INFO: Latency metrics for node master1 Jun 3 21:59:41.238: INFO: Logging node info for node master2 Jun 3 21:59:41.241: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 37714 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 21:59:41.241: INFO: Logging kubelet events for node master2 Jun 3 21:59:41.244: INFO: Logging pods the kubelet thinks is on node master2 Jun 3 21:59:41.253: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.253: INFO: Container node-exporter ready: true, restart count 0 Jun 3 21:59:41.253: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 21:59:41.253: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 21:59:41.253: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 21:59:41.253: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.253: INFO: Container prometheus-operator ready: true, restart count 0 Jun 3 21:59:41.253: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 21:59:41.253: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 21:59:41.253: INFO: Init container install-cni ready: true, restart count 2 Jun 3 21:59:41.253: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 21:59:41.253: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.253: INFO: Container kube-multus ready: true, restart count 1 Jun 3 21:59:41.328: INFO: Latency metrics for node master2 Jun 3 21:59:41.328: INFO: Logging node info for node master3 Jun 3 21:59:41.331: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 37727 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:31 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 21:59:41.331: INFO: Logging kubelet events for node master3 Jun 3 21:59:41.335: INFO: Logging pods the kubelet thinks is on node master3 Jun 3 21:59:41.344: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 21:59:41.345: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 21:59:41.345: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container coredns ready: true, restart count 1 Jun 3 21:59:41.345: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.345: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.345: INFO: Container node-exporter ready: true, restart count 0 Jun 3 21:59:41.345: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 21:59:41.345: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Init container install-cni ready: true, restart count 2 Jun 3 21:59:41.345: INFO: Container kube-flannel ready: true, restart count 2 Jun 3 21:59:41.345: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container kube-multus ready: true, restart count 1 Jun 3 21:59:41.345: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container nfd-controller ready: true, restart count 0 Jun 3 21:59:41.345: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.345: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 21:59:41.422: INFO: Latency metrics for node master3 Jun 3 21:59:41.422: INFO: Logging node info for node node1 Jun 3 21:59:41.425: INFO: Node Info: &Node{ObjectMeta:{node1 482ecf0f-7f88-436d-a313-227096fe8b8d 37860 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:11:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 21:59:41.426: INFO: Logging kubelet events for node node1 Jun 3 21:59:41.428: INFO: Logging pods the kubelet thinks is on node node1 Jun 3 21:59:41.444: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 21:59:41.445: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Init container install-cni ready: true, restart count 2 Jun 3 21:59:41.445: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 21:59:41.445: INFO: nodeport-test-vd8hl started at 2022-06-03 21:57:26 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container nodeport-test ready: true, restart count 0 Jun 3 21:59:41.445: INFO: frontend-685fc574d5-rvm4j started at 2022-06-03 21:59:19 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container guestbook-frontend ready: false, restart count 0 Jun 3 21:59:41.445: INFO: execpodc96mn started at 2022-06-03 21:57:32 +0000 UTC (0+1 container statuses recorded) Jun 3 
21:59:41.445: INFO: Container agnhost-container ready: true, restart count 0 Jun 3 21:59:41.445: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 21:59:41.445: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 3 21:59:41.445: INFO: Container discover ready: false, restart count 0 Jun 3 21:59:41.445: INFO: Container init ready: false, restart count 0 Jun 3 21:59:41.445: INFO: Container install ready: false, restart count 0 Jun 3 21:59:41.445: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 3 21:59:41.445: INFO: Container config-reloader ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container grafana ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container prometheus ready: true, restart count 1 Jun 3 21:59:41.445: INFO: ss2-2 started at 2022-06-03 21:59:38 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container webserver ready: false, restart count 0 Jun 3 21:59:41.445: INFO: liveness-c3c83afb-b3bf-4ac7-849b-27de5e84cc08 started at 2022-06-03 21:57:17 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container agnhost-container ready: true, restart count 0 Jun 3 21:59:41.445: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 21:59:41.445: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.445: INFO: Container nodereport ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container reconcile ready: true, restart count 0 Jun 3 21:59:41.445: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.445: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container node-exporter ready: true, restart count 0 Jun 3 21:59:41.445: INFO: frontend-685fc574d5-qtg9r started at 2022-06-03 21:59:19 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container guestbook-frontend ready: false, restart count 0 Jun 3 21:59:41.445: INFO: affinity-clusterip-timeout-zmnx4 started at 2022-06-03 21:59:38 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container affinity-clusterip-timeout ready: false, restart count 0 Jun 3 21:59:41.445: INFO: agnhost-replica-6bcf79b489-clcxg started at 2022-06-03 21:59:20 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container replica ready: false, restart count 0 Jun 3 21:59:41.445: INFO: test-pod started at 2022-06-03 21:57:06 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container webserver ready: true, restart count 0 Jun 3 21:59:41.445: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container kube-multus ready: true, restart count 1 Jun 3 21:59:41.445: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 21:59:41.445: INFO: cmk-webhook-6c9d5f8578-c927x 
started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.445: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 21:59:41.445: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 21:59:41.445: INFO: Container collectd ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 21:59:41.445: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.771: INFO: Latency metrics for node node1 Jun 3 21:59:41.771: INFO: Logging node info for node node2 Jun 3 21:59:41.774: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 37891 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:12:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 21:59:36 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 21:59:41.774: INFO: Logging kubelet events for node node2 Jun 3 21:59:41.776: INFO: Logging pods the kubelet thinks is on node node2 Jun 3 21:59:41.791: INFO: pod-service-account-defaultsa-nomountspec started at 2022-06-03 21:58:52 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.791: INFO: Container token-test ready: true, restart count 0 Jun 3 21:59:41.791: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.791: INFO: Container tas-extender ready: true, restart count 0 Jun 3 21:59:41.791: INFO: ss2-1 started at 2022-06-03 21:58:46 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.791: INFO: Container webserver ready: true, restart count 0 Jun 3 21:59:41.791: INFO: var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9 started at 2022-06-03 21:59:35 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.791: INFO: Container dapi-container ready: false, restart count 0 Jun 3 21:59:41.791: INFO: affinity-clusterip-timeout-lzqf8 started at 2022-06-03 21:59:38 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container affinity-clusterip-timeout ready: false, 
restart count 0 Jun 3 21:59:41.792: INFO: ss2-0 started at 2022-06-03 21:59:35 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container webserver ready: true, restart count 0 Jun 3 21:59:41.792: INFO: affinity-clusterip-timeout-jvbkm started at 2022-06-03 21:59:38 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container affinity-clusterip-timeout ready: false, restart count 0 Jun 3 21:59:41.792: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Init container install-cni ready: true, restart count 0 Jun 3 21:59:41.792: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 21:59:41.792: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container kube-multus ready: true, restart count 1 Jun 3 21:59:41.792: INFO: test-new-deployment-847dcfb7fb-4js5f started at 2022-06-03 21:59:37 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container httpd ready: false, restart count 0 Jun 3 21:59:41.792: INFO: update-demo-nautilus-p2776 started at (0+0 container statuses recorded) Jun 3 21:59:41.792: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 21:59:41.792: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 21:59:41.792: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 21:59:41.792: INFO: Container collectd ready: true, restart count 0 Jun 3 21:59:41.792: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 21:59:41.792: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.792: INFO: test-webserver-f9e25df1-fdee-4024-abe0-cd0a6e13c7ec started at 2022-06-03 21:56:50 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container test-webserver ready: true, restart count 0 Jun 3 21:59:41.792: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 21:59:41.792: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.792: INFO: Container nodereport ready: true, restart count 0 Jun 3 21:59:41.792: INFO: Container reconcile ready: true, restart count 0 Jun 3 21:59:41.792: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 21:59:41.792: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 21:59:41.792: INFO: Container node-exporter ready: true, restart count 0 Jun 3 21:59:41.792: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 21:59:41.792: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 21:59:41.792: INFO: downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1 started at (0+0 container 
statuses recorded) Jun 3 21:59:41.792: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 21:59:41.792: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded) Jun 3 21:59:41.792: INFO: Container discover ready: false, restart count 0 Jun 3 21:59:41.792: INFO: Container init ready: false, restart count 0 Jun 3 21:59:41.792: INFO: Container install ready: false, restart count 0 Jun 3 21:59:41.792: INFO: nodeport-test-5k588 started at 2022-06-03 21:57:25 +0000 UTC (0+1 container statuses recorded) Jun 3 21:59:41.792: INFO: Container nodeport-test ready: true, restart count 0 Jun 3 21:59:42.017: INFO: Latency metrics for node node2 Jun 3 21:59:42.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7321" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [136.213 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:59:41.096: Unexpected error: <*errors.errorString | 0xc004990560>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32697 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32697 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":7,"skipped":83,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:42.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Jun 3 21:59:42.111: INFO: created test-event-1 Jun 3 21:59:42.114: INFO: created test-event-2 Jun 3 21:59:42.117: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jun 3 21:59:42.118: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jun 3 21:59:42.129: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:42.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2673" for this suite. 
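------------------------------
The NodePort failure recorded earlier in this block ("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32697 over TCP protocol") comes from a polling TCP reachability check against the node's InternalIP and the allocated node port. Below is a minimal, self-contained Go sketch of that style of check; the endpoint is the one from the log, while the 5s per-dial timeout and 2s retry interval are illustrative assumptions, not values taken from the e2e framework.

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable polls a TCP endpoint until it accepts a connection or the
// overall timeout elapses, mirroring the shape of the e2e reachability poll.
func reachable(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second) // per-dial timeout (assumed)
		if err == nil {
			conn.Close()
			return nil // something answered on the node port
		}
		time.Sleep(2 * time.Second) // retry interval (assumed)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// Endpoint taken from the failure above: node1's InternalIP plus the
	// NodePort the test allocated. Running this requires that live cluster.
	if err := reachable("10.10.190.207:32697", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("endpoint reachable")
}

Run from a node or an in-cluster pod, a probe like this helps separate kube-proxy/NodePort programming problems from pod-level readiness problems when triaging this failure.
------------------------------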
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":8,"skipped":102,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:35.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Jun 3 21:59:35.857: INFO: Waiting up to 5m0s for pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9" in namespace "var-expansion-1738" to be "Succeeded or Failed" Jun 3 21:59:35.864: INFO: Pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.564403ms Jun 3 21:59:37.868: INFO: Pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011697721s Jun 3 21:59:39.875: INFO: Pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018427272s Jun 3 21:59:41.879: INFO: Pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022536148s Jun 3 21:59:43.884: INFO: Pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.027435651s STEP: Saw pod success Jun 3 21:59:43.884: INFO: Pod "var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9" satisfied condition "Succeeded or Failed" Jun 3 21:59:43.887: INFO: Trying to get logs from node node2 pod var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9 container dapi-container: STEP: delete the pod Jun 3 21:59:43.902: INFO: Waiting for pod var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9 to disappear Jun 3 21:59:43.904: INFO: Pod var-expansion-0d63ffa1-fbc9-4a25-8bb7-d7766e1b23e9 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:43.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1738" for this suite. 
• [SLOW TEST:8.090 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":32,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:37.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:59:37.550: INFO: Creating simple deployment test-new-deployment Jun 3 21:59:37.560: INFO: deployment "test-new-deployment" doesn't have the required revision set Jun 3 21:59:39.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:59:41.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:59:43.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890377, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 21:59:45.594: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-8003 011fca57-d598-40f6-9c9a-d02f8fb05daa 38178 3 2022-06-03 21:59:37 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-06-03 21:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 21:59:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00443f248 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-03 21:59:43 +0000 UTC,LastTransitionTime:2022-06-03 21:59:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-06-03 21:59:43 +0000 UTC,LastTransitionTime:2022-06-03 21:59:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 3 21:59:45.601: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-8003 9074ea32-dc41-4fd6-9e2b-90514254cc06 38181 3 2022-06-03 21:59:37 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 011fca57-d598-40f6-9c9a-d02f8fb05daa 0xc00443f667 0xc00443f668}] [] [{kube-controller-manager Update apps/v1 2022-06-03 21:59:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"011fca57-d598-40f6-9c9a-d02f8fb05daa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00443f6d8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 3 21:59:45.604: INFO: Pod "test-new-deployment-847dcfb7fb-4js5f" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-4js5f test-new-deployment-847dcfb7fb- deployment-8003 d64251ba-095a-4ea1-afe3-b69d21edef66 38135 0 2022-06-03 21:59:37 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.45" ], "mac": "7a:f0:de:da:a2:8e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.45" ], "mac": "7a:f0:de:da:a2:8e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 9074ea32-dc41-4fd6-9e2b-90514254cc06 0xc00443facf 0xc00443fae0}] [] [{kube-controller-manager Update v1 2022-06-03 21:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9074ea32-dc41-4fd6-9e2b-90514254cc06\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 21:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 21:59:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m6mnq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6mnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:59:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:59:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.45,StartTime:2022-06-03 21:59:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 21:59:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e536bacfbe2e297bfa76155e7752f8ca4f9a0457ceb0b8c22e5cab46762d489b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 21:59:45.605: INFO: Pod "test-new-deployment-847dcfb7fb-vwtxs" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-vwtxs test-new-deployment-847dcfb7fb- deployment-8003 476a0cca-72b0-44e2-a7ab-25d653e0f75a 38185 0 2022-06-03 21:59:45 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 9074ea32-dc41-4fd6-9e2b-90514254cc06 0xc00443fccf 0xc00443fce0}] [] [{kube-controller-manager Update v1 2022-06-03 21:59:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9074ea32-dc41-4fd6-9e2b-90514254cc06\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m5gd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5gd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 21:59:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:45.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8003" for this suite. • [SLOW TEST:8.084 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":11,"skipped":220,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:45.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Jun 3 21:59:45.691: INFO: created test-pod-1 Jun 3 21:59:45.702: INFO: created test-pod-2 Jun 3 21:59:45.711: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:45.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-664" for this suite. 
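The scale-subresource steps above map onto plain kubectl. A minimal sketch, assuming the test's deployment and namespace still exist; kubectl scale drives the same /scale subresource the test reads and patches:

# Read the scale subresource directly (the "getting scale subresource" step):
kubectl get --raw /apis/apps/v1/namespaces/deployment-8003/deployments/test-new-deployment/scale
# Update replicas through the scale subresource, then confirm Spec.Replicas changed:
kubectl scale deployment test-new-deployment -n deployment-8003 --replicas=4
kubectl get deployment test-new-deployment -n deployment-8003 -o jsonpath='{.spec.replicas}{"\n"}'

The pod DeleteCollection flow that closes this block is just as short by hand; the pod names and the type=Testing label are hypothetical stand-ins for the test's fixtures:

# Create a labelled set of pods, then delete them as one collection:
for i in 1 2 3; do
  kubectl run "test-pod-$i" --image=busybox --labels=type=Testing --restart=Never -- sleep 3600
done
kubectl delete pods -l type=Testing
kubectl get pods -l type=Testing   # "No resources found" once deletion completes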
• ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:41.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Jun 3 21:59:41.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 create -f -' Jun 3 21:59:41.531: INFO: stderr: "" Jun 3 21:59:41.531: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 21:59:41.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 21:59:41.708: INFO: stderr: "" Jun 3 21:59:41.709: INFO: stdout: "update-demo-nautilus-brrcw update-demo-nautilus-p2776 " Jun 3 21:59:41.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods update-demo-nautilus-brrcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 21:59:41.890: INFO: stderr: "" Jun 3 21:59:41.890: INFO: stdout: "" Jun 3 21:59:41.890: INFO: update-demo-nautilus-brrcw is created but not running Jun 3 21:59:46.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 21:59:47.081: INFO: stderr: "" Jun 3 21:59:47.081: INFO: stdout: "update-demo-nautilus-brrcw update-demo-nautilus-p2776 " Jun 3 21:59:47.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods update-demo-nautilus-brrcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 21:59:47.254: INFO: stderr: "" Jun 3 21:59:47.254: INFO: stdout: "" Jun 3 21:59:47.254: INFO: update-demo-nautilus-brrcw is created but not running Jun 3 21:59:52.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 21:59:52.441: INFO: stderr: "" Jun 3 21:59:52.441: INFO: stdout: "update-demo-nautilus-brrcw update-demo-nautilus-p2776 " Jun 3 21:59:52.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods update-demo-nautilus-brrcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 21:59:52.609: INFO: stderr: "" Jun 3 21:59:52.609: INFO: stdout: "true" Jun 3 21:59:52.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods update-demo-nautilus-brrcw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 21:59:52.760: INFO: stderr: "" Jun 3 21:59:52.760: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 21:59:52.760: INFO: validating pod update-demo-nautilus-brrcw Jun 3 21:59:52.764: INFO: got data: { "image": "nautilus.jpg" } Jun 3 21:59:52.764: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 21:59:52.764: INFO: update-demo-nautilus-brrcw is verified up and running Jun 3 21:59:52.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods update-demo-nautilus-p2776 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 21:59:52.928: INFO: stderr: "" Jun 3 21:59:52.928: INFO: stdout: "true" Jun 3 21:59:52.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods update-demo-nautilus-p2776 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 21:59:53.083: INFO: stderr: "" Jun 3 21:59:53.083: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 21:59:53.083: INFO: validating pod update-demo-nautilus-p2776 Jun 3 21:59:53.085: INFO: got data: { "image": "nautilus.jpg" } Jun 3 21:59:53.085: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 21:59:53.085: INFO: update-demo-nautilus-p2776 is verified up and running STEP: using delete to clean up resources Jun 3 21:59:53.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 delete --grace-period=0 --force -f -' Jun 3 21:59:53.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 21:59:53.211: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 3 21:59:53.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get rc,svc -l name=update-demo --no-headers' Jun 3 21:59:53.408: INFO: stderr: "No resources found in kubectl-8180 namespace.\n" Jun 3 21:59:53.408: INFO: stdout: "" Jun 3 21:59:53.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8180 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 21:59:53.589: INFO: stderr: "" Jun 3 21:59:53.589: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8180" for this suite. 
• [SLOW TEST:12.441 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":15,"skipped":211,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:41.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 21:59:41.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1" in namespace "downward-api-4166" to be "Succeeded or Failed" Jun 3 21:59:41.598: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422479ms Jun 3 21:59:43.602: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005570665s Jun 3 21:59:45.605: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00877842s Jun 3 21:59:47.610: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013891015s Jun 3 21:59:49.613: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016856965s Jun 3 21:59:51.617: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020625912s Jun 3 21:59:53.621: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.024745908s STEP: Saw pod success Jun 3 21:59:53.621: INFO: Pod "downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1" satisfied condition "Succeeded or Failed" Jun 3 21:59:53.623: INFO: Trying to get logs from node node2 pod downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1 container client-container: STEP: delete the pod Jun 3 21:59:53.636: INFO: Waiting for pod downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1 to disappear Jun 3 21:59:53.638: INFO: Pod downwardapi-volume-f5b204a1-2df3-44a9-b2bd-14b8536825e1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:53.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4166" for this suite. 
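The pod under test reduces to a downwardAPI volume that projects metadata.name into a file the container then cats. A minimal sketch with hypothetical pod and volume names; the e2e suite uses its own test image rather than busybox:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
# Once the pod reaches Succeeded, its log is the pod's own name:
kubectl logs downwardapi-volume-demo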
• [SLOW TEST:12.084 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":211,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":12,"skipped":238,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:45.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Jun 3 21:59:45.770: INFO: Waiting up to 5m0s for pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1" in namespace "var-expansion-9533" to be "Succeeded or Failed" Jun 3 21:59:45.773: INFO: Pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.135529ms Jun 3 21:59:47.777: INFO: Pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007276844s Jun 3 21:59:49.781: INFO: Pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010962284s Jun 3 21:59:51.785: INFO: Pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014502707s Jun 3 21:59:53.789: INFO: Pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018573439s STEP: Saw pod success Jun 3 21:59:53.789: INFO: Pod "var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1" satisfied condition "Succeeded or Failed" Jun 3 21:59:53.791: INFO: Trying to get logs from node node2 pod var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1 container dapi-container: STEP: delete the pod Jun 3 21:59:53.802: INFO: Waiting for pod var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1 to disappear Jun 3 21:59:53.804: INFO: Pod var-expansion-83ae5c95-bec7-4de9-a5ba-c097ae3838f1 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:53.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9533" for this suite. 
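What is being verified here is kubelet's $(VAR) substitution in a container's command, not shell expansion. A sketch, pod name hypothetical:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: expanded by kubelet
    # kubelet rewrites $(MESSAGE) before the process starts; no shell is involved:
    command: ["/bin/echo", "$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo   # prints: expanded by kubelet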
• [SLOW TEST:8.073 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":238,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:43.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9 Jun 3 21:59:44.004: INFO: Pod name my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9: Found 0 pods out of 1 Jun 3 21:59:49.016: INFO: Pod name my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9: Found 1 pods out of 1 Jun 3 21:59:49.016: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9" are running Jun 3 21:59:53.024: INFO: Pod "my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9-b98zk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 21:59:44 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 21:59:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 21:59:44 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 21:59:44 +0000 UTC Reason: Message:}]) Jun 3 21:59:53.025: INFO: Trying to dial the pod Jun 3 21:59:58.036: INFO: Controller my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9: Got expected result from replica 1 [my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9-b98zk]: "my-hostname-basic-1f0e974a-5752-4ff4-8677-67cccd8916b9-b98zk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:58.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3490" for this suite. 
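The replica check above dials each pod through the API server's pods/proxy subresource and expects the pod's own name back. A sketch in the default namespace; the agnhost tag is an assumption, and the RC name in the real test is randomized:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed tag
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
POD=$(kubectl get pods -l name=my-hostname-basic -o jsonpath='{.items[0].metadata.name}')
# Dial the replica the way the test does, via the pod proxy:
kubectl get --raw "/api/v1/namespaces/default/pods/$POD:9376/proxy/"   # returns the pod name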
• [SLOW TEST:14.071 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":33,"skipped":579,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:53.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1108.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1108.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1108.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1108.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 21:59:59.736: INFO: DNS probes using dns-1108/dns-test-a6d1875e-6bce-416b-ba5b-e25a8a60fc6a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 21:59:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1108" for this suite. 
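In the probe scripts above, each $$ is Go-template escaping for a single $. Run inside the probe pod (any image with getent and dig, such as the suite's jessie-dnsutils), the checks reduce to:

# Hosts entry for the querier through its headless service:
getent hosts dns-querier-1.dns-test-service.dns-1108.svc.cluster.local && echo OK
# Pod A record derived from the pod's own IP; +notcp probes UDP, +tcp probes TCP:
podARec="$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1108.pod.cluster.local"}')"
dig +notcp +noall +answer +search "$podARec" A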
• [SLOW TEST:6.083 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:42.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 3 21:59:42.552: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 3 21:59:44.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:59:46.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jun 3 21:59:48.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 21:59:50.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890382, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 21:59:53.571: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 21:59:53.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:01.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6455" for this suite. 
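The piece that ties this test together is a CRD whose spec.conversion routes v1-to-v2 conversion through the webhook service deployed above. A shape sketch only; the group, kind, and caBundle are placeholders, and the real test generates its own certificates:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com    # placeholder group/plural
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        caBundle: "<base64-encoded CA>"   # placeholder
        service:
          name: e2e-test-crd-conversion-webhook
          namespace: crd-webhook-6455
          path: /crdconvert
          port: 9443
EOF
# Creating an object as v1 and reading it back as v2 then exercises the webhook.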
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:19.490 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":9,"skipped":135,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:01.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 3 22:00:01.780: INFO: starting watch STEP: patching STEP: updating Jun 3 22:00:01.787: INFO: waiting for watch events with expected annotations Jun 3 22:00:01.787: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:01.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2161" for this suite.
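The sequence of STEPs above is plain CRUD against the cluster-scoped IngressClass resource. The same operations by hand, with a hypothetical class name and controller:

kubectl create -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class              # hypothetical
spec:
  controller: example.com/ingress-controller
EOF
kubectl get ingressclass example-class
kubectl patch ingressclass example-class --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl annotate ingressclass example-class updated=true --overwrite
kubectl delete ingressclass example-class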
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":10,"skipped":142,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:58.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 21:59:58.710: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:00:00.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890398, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890398, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890398, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890398, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:00:03.730: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:04.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1984" for this suite. STEP: Destroying namespace "webhook-1984-markers" for this suite. 
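Fail-closed behaviour comes from failurePolicy: Fail combined with a clientConfig the API server cannot reach, so every matching request is rejected. A sketch with hypothetical names; the namespaceSelector keeps the blast radius to one labelled namespace:

kubectl create -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                 # reject when the webhook cannot be called
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  namespaceSelector:
    matchLabels:
      fail-closed: "true"
  clientConfig:
    service:
      name: no-such-service           # deliberately unreachable
      namespace: default
      path: /validate
EOF
kubectl create namespace webhook-demo
kubectl label namespace webhook-demo fail-closed=true
kubectl -n webhook-demo create configmap should-fail   # rejected: "failed calling webhook"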
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.759 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":34,"skipped":580,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:04.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-a2ec49ee-9f81-4133-8fa6-f34a3f33ea7a STEP: Creating a pod to test consume configMaps Jun 3 22:00:04.863: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671" in namespace "projected-7173" to be "Succeeded or Failed" Jun 3 22:00:04.869: INFO: Pod "pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672936ms Jun 3 22:00:06.873: INFO: Pod "pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00981017s Jun 3 22:00:08.881: INFO: Pod "pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018655117s Jun 3 22:00:10.887: INFO: Pod "pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024524818s STEP: Saw pod success Jun 3 22:00:10.887: INFO: Pod "pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671" satisfied condition "Succeeded or Failed" Jun 3 22:00:10.889: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671 container projected-configmap-volume-test: STEP: delete the pod Jun 3 22:00:10.904: INFO: Waiting for pod pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671 to disappear Jun 3 22:00:10.906: INFO: Pod pod-projected-configmaps-70d679ea-6a81-4022-860f-b0778e42a671 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:10.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7173" for this suite. 
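"Multiple volumes" here means the same ConfigMap projected twice into one pod. A sketch with hypothetical names; the e2e test uses its own test image and randomized suffixes:

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-demo
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-demo
EOF
kubectl logs projected-configmap-demo   # value-1 printed once per mount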
• [SLOW TEST:6.088 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:11.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:11.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2964" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":36,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:11.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 3 22:00:11.192: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:17.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2723" for this suite. 
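Two quick manual equivalents for the tests above. The EndpointSlice assertion is a single read; slices are tied to the default/kubernetes Service by the kubernetes.io/service-name label. The init-container behaviour needs only restartPolicy: Never plus an init container that exits non-zero (names hypothetical):

kubectl get endpoints kubernetes -n default
kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]   # always fails
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never runs"]
EOF
# With RestartNever the pod goes straight to Failed and "app" is never started:
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}{"\n"}'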
• [SLOW TEST:6.572 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":37,"skipped":678,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:17.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Jun 3 22:00:17.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4675 cluster-info' Jun 3 22:00:17.936: INFO: stderr: "" Jun 3 22:00:17.936: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:17.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4675" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":38,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":7,"skipped":179,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:53.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 3 21:59:53.626: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:00:03.198: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:21.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6265" for this suite. 
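What the CRD test above sets up is two CustomResourceDefinitions that share a group and version but declare different kinds, then checks that both schemas are served in the OpenAPI document. A sketch of one such pair; the group, kinds, and empty schemas are illustrative, not the test's generated fixtures:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.demo.example.com
spec:
  group: demo.example.com                            # same group and version...
  scope: Namespaced
  names: {plural: bars, singular: bar, kind: Bar}    # ...different kind
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF

# Both kinds should then show up in the published OpenAPI document:
kubectl get --raw /openapi/v2 | grep -c demo.example.com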
• [SLOW TEST:27.774 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":8,"skipped":179,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:21.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:00:21.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4907 version' Jun 3 22:00:21.534: INFO: stderr: "" Jun 3 22:00:21.534: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:21.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4907" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":9,"skipped":187,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:31.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9883 Jun 3 21:59:31.996: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:59:33.999: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:59:35.999: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 3 21:59:38.000: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 3 21:59:38.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9883 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 3 21:59:38.261: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jun 3 21:59:38.261: INFO: stdout: "iptables" Jun 3 21:59:38.261: INFO: proxyMode: iptables Jun 3 21:59:38.268: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 21:59:38.271: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9883 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9883 I0603 21:59:38.281868 34 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9883, replica count: 3 I0603 21:59:41.334220 34 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 21:59:44.334431 34 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 21:59:47.334701 34 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 21:59:47.339: INFO: Creating new exec pod Jun 3 21:59:52.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9883 exec execpod-affinityst48s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Jun 3 21:59:52.604: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Jun 3 21:59:52.604: 
INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 21:59:52.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9883 exec execpod-affinityst48s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.35.252 80' Jun 3 21:59:52.850: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.35.252 80\nConnection to 10.233.35.252 80 port [tcp/http] succeeded!\n" Jun 3 21:59:52.850: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 21:59:52.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9883 exec execpod-affinityst48s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.35.252:80/ ; done' Jun 3 21:59:53.153: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n" Jun 3 21:59:53.153: INFO: stdout: "\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm\naffinity-clusterip-timeout-jvbkm" Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: 
affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Received response from host: affinity-clusterip-timeout-jvbkm Jun 3 21:59:53.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9883 exec execpod-affinityst48s -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.35.252:80/' Jun 3 21:59:53.432: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n" Jun 3 21:59:53.432: INFO: stdout: "affinity-clusterip-timeout-jvbkm" Jun 3 22:00:13.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9883 exec execpod-affinityst48s -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.35.252:80/' Jun 3 22:00:13.697: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.35.252:80/\n" Jun 3 22:00:13.697: INFO: stdout: "affinity-clusterip-timeout-zmnx4" Jun 3 22:00:13.697: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9883, will wait for the garbage collector to delete the pods Jun 3 22:00:13.765: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 4.761319ms Jun 3 22:00:13.865: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.660237ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:22.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9883" for this suite. 
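The behavior exercised above, where 16 consecutive requests all land on affinity-clusterip-timeout-jvbkm but a request after a 20-second pause lands on affinity-clusterip-timeout-zmnx4, comes from ClientIP session affinity with a timeout. A sketch of the relevant Service stanza; the selector label and the 10-second timeout are illustrative guesses, not values read from the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    app: affinity-backend          # illustrative; the test targets its RC's pods
  ports:
  - port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10           # affinity entry expires after 10 idle seconds
EOF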
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:50.222 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":440,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:59.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-dtt5 STEP: Creating a pod to test atomic-volume-subpath Jun 3 21:59:59.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dtt5" in namespace "subpath-7456" to be "Succeeded or Failed" Jun 3 21:59:59.864: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575428ms Jun 3 22:00:01.868: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010166645s Jun 3 22:00:03.873: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 4.014985399s Jun 3 22:00:05.877: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 6.019373955s Jun 3 22:00:07.882: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 8.024534994s Jun 3 22:00:09.885: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 10.027576401s Jun 3 22:00:11.889: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 12.030912146s Jun 3 22:00:13.895: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 14.037117276s Jun 3 22:00:15.900: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 16.042399228s Jun 3 22:00:17.905: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 18.046842062s Jun 3 22:00:19.909: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 20.050850781s Jun 3 22:00:21.912: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. Elapsed: 22.054166818s Jun 3 22:00:23.916: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.05835916s Jun 3 22:00:25.922: INFO: Pod "pod-subpath-test-configmap-dtt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.064204658s STEP: Saw pod success Jun 3 22:00:25.922: INFO: Pod "pod-subpath-test-configmap-dtt5" satisfied condition "Succeeded or Failed" Jun 3 22:00:25.925: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-dtt5 container test-container-subpath-configmap-dtt5: STEP: delete the pod Jun 3 22:00:25.943: INFO: Waiting for pod pod-subpath-test-configmap-dtt5 to disappear Jun 3 22:00:25.945: INFO: Pod pod-subpath-test-configmap-dtt5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-dtt5 Jun 3 22:00:25.945: INFO: Deleting pod "pod-subpath-test-configmap-dtt5" in namespace "subpath-7456" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:25.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7456" for this suite. • [SLOW TEST:26.141 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":243,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:21.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Jun 3 22:00:21.616: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 3 22:00:26.619: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:26.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4323" for this suite. 
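The scale-subresource test above reads and writes the ReplicaSet's /scale endpoint rather than editing spec.replicas on the object itself. Everyday equivalents, with illustrative names; note that patching the subresource explicitly requires kubectl v1.24 or newer, unlike the v1.21 client used in this run:

# Read the scale subresource directly:
kubectl get --raw /apis/apps/v1/namespaces/default/replicasets/test-rs/scale

# "kubectl scale" updates the same subresource:
kubectl scale replicaset test-rs --replicas=2

# With kubectl >= 1.24 the subresource can also be patched directly:
kubectl patch replicaset test-rs --subresource=scale --type=merge -p '{"spec":{"replicas":3}}'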
• [SLOW TEST:5.051 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":203,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:25.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:00:26.024: INFO: The status of Pod pod-secrets-33c60f6e-62eb-4bd8-9c7e-8a8ec6aa8ba6 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:28.028: INFO: The status of Pod pod-secrets-33c60f6e-62eb-4bd8-9c7e-8a8ec6aa8ba6 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:30.029: INFO: The status of Pod pod-secrets-33c60f6e-62eb-4bd8-9c7e-8a8ec6aa8ba6 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:30.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3522" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":19,"skipped":249,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:30.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Jun 3 22:00:30.087: INFO: Major version: 1 STEP: Confirm minor version Jun 3 22:00:30.087: INFO: cleanMinorVersion: 21 Jun 3 22:00:30.087: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:30.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7986" for this suite. 
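The server-version check above simply reads the version endpoint and parses the major and minor fields. The same data is available from the command line; this hits the endpoint the test uses:

# Returns the major/minor/gitVersion JSON seen in the kubectl version stdout earlier:
kubectl get --raw /version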
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":20,"skipped":250,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:17.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5116 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5116 I0603 22:00:18.036747 38 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5116, replica count: 2 I0603 22:00:21.087731 38 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:00:24.095607 38 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 22:00:24.095: INFO: Creating new exec pod Jun 3 22:00:29.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5116 exec execpodl4vkn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jun 3 22:00:29.640: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jun 3 22:00:29.640: INFO: stdout: "" Jun 3 22:00:30.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5116 exec execpodl4vkn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jun 3 22:00:30.879: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jun 3 22:00:30.879: INFO: stdout: "externalname-service-ggqps" Jun 3 22:00:30.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5116 exec execpodl4vkn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.34.201 80' Jun 3 22:00:31.192: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.34.201 80\nConnection to 10.233.34.201 80 port [tcp/http] succeeded!\n" Jun 3 22:00:31.192: INFO: stdout: "externalname-service-lqhph" Jun 3 22:00:31.192: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:31.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5116" for this suite. 
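The conversion test above starts from an ExternalName Service (a pure DNS alias with no cluster IP or endpoints) and flips it to ClusterIP backed by pods. A sketch of the starting object and the mutation; the external name, label, and ports are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com   # served as a CNAME; no selector, no clusterIP
EOF

# Flipping the type means clearing externalName and adding selector/ports,
# e.g. with a merge patch (illustrative backend label):
kubectl patch service externalname-service --type=merge -p '
{"spec":{"type":"ClusterIP","externalName":null,
         "selector":{"app":"externalname-backend"},
         "ports":[{"port":80,"targetPort":80}]}}'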
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:13.211 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":39,"skipped":709,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:22.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Jun 3 22:00:22.273: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:24.277: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:26.277: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:28.278: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:30.278: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 3 22:00:31.293: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:32.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7507" for this suite. 
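Adoption and release above are driven purely by labels: a bare pod whose labels match a ReplicaSet's selector gains an ownerReference to that ReplicaSet, and changing the pod's label makes the controller release it (and create a replacement). A sketch with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release      # matches the ReplicaSet selector below
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
EOF

# The bare pod is adopted (it gains an ownerReference); relabeling releases it:
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'
kubectl label pod pod-adoption-release name=released --overwrite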
• [SLOW TEST:10.080 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":27,"skipped":460,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:26.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:00:26.767: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788" in namespace "projected-1486" to be "Succeeded or Failed" Jun 3 22:00:26.770: INFO: Pod "downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035171ms Jun 3 22:00:28.774: INFO: Pod "downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007560155s Jun 3 22:00:30.777: INFO: Pod "downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010754638s Jun 3 22:00:32.782: INFO: Pod "downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015363548s STEP: Saw pod success Jun 3 22:00:32.782: INFO: Pod "downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788" satisfied condition "Succeeded or Failed" Jun 3 22:00:32.785: INFO: Trying to get logs from node node2 pod downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788 container client-container: STEP: delete the pod Jun 3 22:00:32.799: INFO: Waiting for pod downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788 to disappear Jun 3 22:00:32.800: INFO: Pod downwardapi-volume-4b3f014a-ea5b-47a8-99a2-6e05c2f3d788 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:32.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1486" for this suite. 
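The DefaultMode test above mounts a downward-API file through a projected volume and asserts the resulting file mode. A sketch of the volume stanza, with illustrative paths and the 0400 mode as an assumed value:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # expect -r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # applied to every file this volume projects
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF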
• [SLOW TEST:6.073 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":238,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:33.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Jun 3 22:00:33.138: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Jun 3 22:00:33.152: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:33.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3403" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":12,"skipped":356,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:32.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:00:32.412: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b155a034-9456-4d52-958f-8fa5b486b204" in namespace "security-context-test-5713" to be "Succeeded or Failed" Jun 3 22:00:32.415: INFO: Pod "busybox-readonly-false-b155a034-9456-4d52-958f-8fa5b486b204": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.806653ms Jun 3 22:00:34.418: INFO: Pod "busybox-readonly-false-b155a034-9456-4d52-958f-8fa5b486b204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006233896s Jun 3 22:00:36.421: INFO: Pod "busybox-readonly-false-b155a034-9456-4d52-958f-8fa5b486b204": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009075837s Jun 3 22:00:36.421: INFO: Pod "busybox-readonly-false-b155a034-9456-4d52-958f-8fa5b486b204" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:36.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5713" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:31.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Jun 3 22:00:31.308: INFO: Waiting up to 5m0s for pod "client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec" in namespace "containers-8829" to be "Succeeded or Failed" Jun 3 22:00:31.310: INFO: Pod "client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231413ms Jun 3 22:00:33.313: INFO: Pod "client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004690376s Jun 3 22:00:35.318: INFO: Pod "client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00986498s Jun 3 22:00:37.322: INFO: Pod "client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013837988s STEP: Saw pod success Jun 3 22:00:37.322: INFO: Pod "client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec" satisfied condition "Succeeded or Failed" Jun 3 22:00:37.324: INFO: Trying to get logs from node node1 pod client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec container agnhost-container: STEP: delete the pod Jun 3 22:00:37.338: INFO: Waiting for pod client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec to disappear Jun 3 22:00:37.339: INFO: Pod client-containers-c3bf62ed-5dcd-4273-af3c-43cd1688e9ec no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:37.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8829" for this suite. 
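"Overriding the image's default arguments (docker cmd)" above is just setting args in the container spec, which replaces the image's CMD while leaving its ENTRYPOINT alone. A sketch using busybox instead of the test's agnhost image; since busybox declares no ENTRYPOINT, the supplied args run directly:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    args: ["echo", "overridden", "arguments"]   # replaces the image CMD ("sh")
EOF

kubectl logs override-args-demo    # prints: overridden arguments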
• [SLOW TEST:6.094 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:30.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:00:30.734: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 3 22:00:32.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:00:34.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890430, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Jun 3 22:00:37.757: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:37.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7779" for this suite. STEP: Destroying namespace "webhook-7779-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.689 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":21,"skipped":270,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:36.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-02ed1107-0988-429e-9b92-c1ef3b3268ef STEP: Creating a pod to test consume configMaps Jun 3 22:00:36.532: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a" in namespace "projected-2248" to be "Succeeded or Failed" Jun 3 22:00:36.535: INFO: Pod "pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400366ms Jun 3 22:00:38.540: INFO: Pod "pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007739942s Jun 3 22:00:40.546: INFO: Pod "pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013475145s STEP: Saw pod success Jun 3 22:00:40.546: INFO: Pod "pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a" satisfied condition "Succeeded or Failed" Jun 3 22:00:40.549: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a container agnhost-container: STEP: delete the pod Jun 3 22:00:40.566: INFO: Waiting for pod pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a to disappear Jun 3 22:00:40.568: INFO: Pod pod-projected-configmaps-dfc3d230-2f9e-4802-8c71-ab16dd7ae61a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:40.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2248" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":490,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:59:53.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jun 3 21:59:57.877: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5488 PodName:var-expansion-ecf0dbb2-6984-4989-8c33-e87009132e31 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:59:57.877: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Jun 3 21:59:57.965: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5488 PodName:var-expansion-ecf0dbb2-6984-4989-8c33-e87009132e31 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 21:59:57.965: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Jun 3 21:59:58.555: INFO: Successfully updated pod "var-expansion-ecf0dbb2-6984-4989-8c33-e87009132e31" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jun 3 21:59:58.558: INFO: Deleting pod "var-expansion-ecf0dbb2-6984-4989-8c33-e87009132e31" in namespace "var-expansion-5488" Jun 3 21:59:58.563: INFO: Wait up to 5m0s for pod "var-expansion-ecf0dbb2-6984-4989-8c33-e87009132e31" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:42.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5488" for this suite. 
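The subpath-writing test above mounts a volume at a subpath that is expanded from the pod's environment. The mechanism is volumeMounts.subPathExpr, which expands $(VAR) references against the container's env at mount time. A sketch with illustrative names; the /volume_mount path echoes the log's own command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpathexpr-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "touch /volume_mount/test.log && ls /volume_mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: mypath/$(POD_NAME)   # expanded per container from its env
  volumes:
  - name: workdir
    emptyDir: {}
EOF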
• [SLOW TEST:48.741 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":14,"skipped":248,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:33.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 3 22:00:33.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9560 db0340e9-1847-4133-8679-f4c126bdf5b2 39382 0 2022-06-03 22:00:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:33.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9560 db0340e9-1847-4133-8679-f4c126bdf5b2 39383 0 2022-06-03 22:00:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:33.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9560 db0340e9-1847-4133-8679-f4c126bdf5b2 39384 0 2022-06-03 22:00:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 3 22:00:43.268: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9560 db0340e9-1847-4133-8679-f4c126bdf5b2 39684 0 2022-06-03 22:00:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:33 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:43.268: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9560 db0340e9-1847-4133-8679-f4c126bdf5b2 39685 0 2022-06-03 22:00:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:43.268: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9560 db0340e9-1847-4133-8679-f4c126bdf5b2 39686 0 2022-06-03 22:00:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:43.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9560" for this suite. • [SLOW TEST:10.065 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":13,"skipped":370,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:42.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-54bebcfc-d6c9-4c1c-baab-912681673faf STEP: Creating a pod to test consume configMaps Jun 3 22:00:42.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988" in namespace "configmap-884" to be "Succeeded or Failed" Jun 3 22:00:42.652: INFO: Pod "pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127347ms Jun 3 22:00:44.656: INFO: Pod "pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005751017s Jun 3 22:00:46.660: INFO: Pod "pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009547794s STEP: Saw pod success Jun 3 22:00:46.660: INFO: Pod "pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988" satisfied condition "Succeeded or Failed" Jun 3 22:00:46.663: INFO: Trying to get logs from node node1 pod pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988 container agnhost-container: STEP: delete the pod Jun 3 22:00:47.308: INFO: Waiting for pod pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988 to disappear Jun 3 22:00:47.310: INFO: Pod pod-configmaps-26d65f3f-befe-4f84-80e8-d0f037a8d988 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:47.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-884" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":262,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:43.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:00:43.669: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:00:45.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890443, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890443, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:00:48.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the 
admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:48.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4622" for this suite. STEP: Destroying namespace "webhook-4622-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.419 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":14,"skipped":382,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:47.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-2554/configmap-test-62f54913-fd99-46b8-adf5-cd74c09f1401 STEP: Creating a pod to test consume configMaps Jun 3 22:00:47.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887" in namespace "configmap-2554" to be "Succeeded or Failed" Jun 3 22:00:47.370: INFO: Pod "pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88614ms Jun 3 22:00:49.374: INFO: Pod "pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010156471s Jun 3 22:00:51.376: INFO: Pod "pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012800868s STEP: Saw pod success Jun 3 22:00:51.376: INFO: Pod "pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887" satisfied condition "Succeeded or Failed" Jun 3 22:00:51.379: INFO: Trying to get logs from node node1 pod pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887 container env-test: STEP: delete the pod Jun 3 22:00:51.391: INFO: Waiting for pod pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887 to disappear Jun 3 22:00:51.393: INFO: Pod pod-configmaps-ead92833-18e1-4fa5-9003-27d1e3244887 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:51.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2554" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":263,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:58:43.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8992 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Jun 3 21:58:43.149: INFO: Found 0 stateful pods, waiting for 3 Jun 3 21:58:53.153: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:58:53.153: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:58:53.153: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 3 21:59:03.154: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:59:03.154: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:59:03.154: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jun 3 21:59:03.178: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 3 21:59:13.207: INFO: Updating stateful set ss2 Jun 3 21:59:13.213: INFO: Waiting for Pod statefulset-8992/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Jun 3 21:59:23.235: INFO: Found 1 stateful pods, waiting for 3 Jun 3 21:59:33.242: INFO: Found 2 stateful pods, waiting for 3 Jun 3 21:59:43.244: INFO: Waiting for pod ss2-0 to 
enter Running - Ready=true, currently Running - Ready=true Jun 3 21:59:43.244: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:59:43.244: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 3 21:59:53.242: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:59:53.242: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 21:59:53.242: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 3 21:59:53.262: INFO: Updating stateful set ss2 Jun 3 21:59:53.266: INFO: Waiting for Pod statefulset-8992/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:00:03.290: INFO: Updating stateful set ss2 Jun 3 22:00:03.295: INFO: Waiting for StatefulSet statefulset-8992/ss2 to complete update Jun 3 22:00:03.295: INFO: Waiting for Pod statefulset-8992/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:00:13.302: INFO: Waiting for StatefulSet statefulset-8992/ss2 to complete update Jun 3 22:00:13.302: INFO: Waiting for Pod statefulset-8992/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:00:23.303: INFO: Waiting for StatefulSet statefulset-8992/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 3 22:00:33.302: INFO: Deleting all statefulset in ns statefulset-8992 Jun 3 22:00:33.305: INFO: Scaling statefulset ss2 to 0 Jun 3 22:00:53.319: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:00:53.322: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:53.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8992" for this suite. 
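The canary and phased rolling update above hinge on spec.updateStrategy.rollingUpdate.partition: only pods whose ordinal is >= the partition move to the new revision, so raising the partition confines the new template to the highest ordinals and lowering it step by step phases the rollout across the set. A minimal client-go sketch of the canary step, reusing the namespace, name, and images from this log (illustrative only, not the e2e framework's own code; the kubeconfig path is assumed):

package main

import (
    "context"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a clientset from the same kubeconfig the test run uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    ssClient := cs.AppsV1().StatefulSets("statefulset-8992")
    ss, err := ssClient.Get(context.TODO(), "ss2", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Point the template at the new image, but set partition=2 so only
    // pods with ordinal >= 2 (here: ss2-2) roll to the new revision.
    partition := int32(2)
    ss.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/httpd:2.4.39-1"
    ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
        Type:          appsv1.RollingUpdateStatefulSetStrategyType,
        RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
    }
    if _, err := ssClient.Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("canary started; lower the partition stepwise to phase the rollout")
}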
• [SLOW TEST:130.220 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:51.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-f7816158-94c1-46da-bc62-af44a5e77435 STEP: Creating a pod to test consume secrets Jun 3 22:00:51.459: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250" in namespace "projected-9801" to be "Succeeded or Failed" Jun 3 22:00:51.462: INFO: Pod "pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940515ms Jun 3 22:00:53.465: INFO: Pod "pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006738657s Jun 3 22:00:55.472: INFO: Pod "pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013207126s STEP: Saw pod success Jun 3 22:00:55.472: INFO: Pod "pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250" satisfied condition "Succeeded or Failed" Jun 3 22:00:55.474: INFO: Trying to get logs from node node2 pod pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250 container secret-volume-test: STEP: delete the pod Jun 3 22:00:55.487: INFO: Waiting for pod pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250 to disappear Jun 3 22:00:55.489: INFO: Pod pod-projected-secrets-b20f8b02-9449-41d4-b8b8-859fcdd81250 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:55.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9801" for this suite. 
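The projected-secret test above builds a pod whose volumes use the projected volume source, which lets a single mount combine secrets, configmaps, and downward API data; here the same Secret backs multiple volumes. A sketch of the volume shape involved, with hypothetical names (the log's generated names are abbreviated):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Two volumes, both projecting the same Secret; the pod mounts each at
    // a different path and a test container reads the files back.
    newVol := func(name string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test", // hypothetical name
                            },
                        },
                    }},
                },
            },
        }
    }
    vols := []corev1.Volume{newVol("secret-volume-1"), newVol("secret-volume-2")}
    fmt.Printf("%+v\n", vols)
}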
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:56:50.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-f9e25df1-fdee-4024-abe0-cd0a6e13c7ec in namespace container-probe-8945 Jun 3 21:56:56.240: INFO: Started pod test-webserver-f9e25df1-fdee-4024-abe0-cd0a6e13c7ec in namespace container-probe-8945 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 21:56:56.243: INFO: Initial restart count of pod test-webserver-f9e25df1-fdee-4024-abe0-cd0a6e13c7ec is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:56.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8945" for this suite. • [SLOW TEST:246.566 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":162,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:48.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:00:48.764: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:56.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5580" for this suite. 
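The custom resource defaulting test above registers a CustomResourceDefinition whose structural OpenAPI v3 schema carries default values; the apiserver applies defaults both to incoming requests and to objects read back from storage, which is exactly what "for requests and from storage" refers to. A sketch of such a schema using the apiextensions v1 Go types (the field names are hypothetical; the test generates its own CRD):

package main

import (
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
    // A structural schema with a "default" on one field; the apiserver
    // applies it on create/update and when serving objects from etcd.
    schema := apiextensionsv1.JSONSchemaProps{
        Type: "object",
        Properties: map[string]apiextensionsv1.JSONSchemaProps{
            "spec": {
                Type: "object",
                Properties: map[string]apiextensionsv1.JSONSchemaProps{
                    "replicas": {
                        Type:    "integer",
                        Default: &apiextensionsv1.JSON{Raw: []byte(`1`)},
                    },
                },
            },
        },
    }
    fmt.Printf("%+v\n", schema)
}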
• [SLOW TEST:8.136 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":15,"skipped":386,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:53.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:00:53.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8" in namespace "downward-api-6427" to be "Succeeded or Failed" Jun 3 22:00:53.404: INFO: Pod "downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101222ms Jun 3 22:00:55.409: INFO: Pod "downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012991715s Jun 3 22:00:57.416: INFO: Pod "downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020481919s STEP: Saw pod success Jun 3 22:00:57.417: INFO: Pod "downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8" satisfied condition "Succeeded or Failed" Jun 3 22:00:57.419: INFO: Trying to get logs from node node1 pod downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8 container client-container: STEP: delete the pod Jun 3 22:00:57.433: INFO: Waiting for pod downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8 to disappear Jun 3 22:00:57.435: INFO: Pod downwardapi-volume-b162e533-945d-42f2-8bd2-8ead169bccc8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:00:57.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6427" for this suite. 
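This downward API volume test, and the CPU variant that follows it, work the same way: a downwardAPI volume file with a resourceFieldRef exposes a container's resource limit as file content, and when the container declares no limit the node's allocatable value is reported instead. A sketch of the volume source, assuming the container name used in the log:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // A downwardAPI volume file backed by resourceFieldRef: the kubelet
    // writes the container's memory limit into the mounted file.
    src := corev1.DownwardAPIVolumeSource{
        Items: []corev1.DownwardAPIVolumeFile{{
            Path: "memory_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "limits.memory",
            },
        }},
    }
    fmt.Printf("%+v\n", src)
}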
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:57.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:00:57.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e" in namespace "downward-api-4697" to be "Succeeded or Failed" Jun 3 22:00:57.485: INFO: Pod "downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.369709ms Jun 3 22:00:59.488: INFO: Pod "downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00652151s Jun 3 22:01:01.492: INFO: Pod "downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010120609s STEP: Saw pod success Jun 3 22:01:01.492: INFO: Pod "downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e" satisfied condition "Succeeded or Failed" Jun 3 22:01:01.494: INFO: Trying to get logs from node node2 pod downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e container client-container: STEP: delete the pod Jun 3 22:01:01.514: INFO: Waiting for pod downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e to disappear Jun 3 22:01:01.516: INFO: Pod downwardapi-volume-c53a7aa7-8039-4216-b451-04625f7e1a3e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:01.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4697" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:01.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 3 22:01:01.627: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jun 3 22:01:01.630: INFO: starting watch STEP: patching STEP: updating Jun 3 22:01:01.640: INFO: waiting for watch events with expected annotations Jun 3 22:01:01.640: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-8598" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":76,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:01.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 3 22:00:01.884: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 38685 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:01.884: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 38685 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 3 22:00:11.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 38874 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:11.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 38874 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 3 22:00:21.900: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 39039 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:21.900: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 39039 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 3 22:00:31.906: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 39341 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:31.906: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-62 f0df6666-7059-4026-86be-1b8b0dccd4ba 39341 0 2022-06-03 22:00:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 3 22:00:41.913: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-62 2968df6e-5e89-4b61-bda3-fc18f9fed920 39614 0 2022-06-03 22:00:41 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:41.913: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-62 2968df6e-5e89-4b61-bda3-fc18f9fed920 39614 0 2022-06-03 22:00:41 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 3 22:00:51.920: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-62 2968df6e-5e89-4b61-bda3-fc18f9fed920 39850 0 2022-06-03 22:00:41 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:00:51.920: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-62 2968df6e-5e89-4b61-bda3-fc18f9fed920 39850 0 2022-06-03 22:00:41 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-03 22:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:01.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-62" for this suite. 
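Each watcher in this test is a plain watch with a label selector, so an object "leaves" the watch as a DELETED event when its label stops matching and "re-enters" as an ADDED event when the label is restored, which is also why the earlier label-changed test saw a delete followed by an add. A minimal client-go sketch of one such watcher (kubeconfig path, namespace, and selector taken from the log):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Only objects matching the selector generate events on this watch.
    w, err := cs.CoreV1().ConfigMaps("watch-62").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    for ev := range w.ResultChan() {
        fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
    }
}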
• [SLOW TEST:60.069 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":11,"skipped":161,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:56.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Jun 3 22:00:56.859: INFO: The status of Pod pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:58.863: INFO: The status of Pod pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:00.863: INFO: The status of Pod pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 3 22:01:01.377: INFO: Successfully updated pod "pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434" Jun 3 22:01:01.377: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434" in namespace "pods-922" to be "terminated due to deadline exceeded" Jun 3 22:01:01.379: INFO: Pod "pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434": Phase="Running", Reason="", readiness=true. Elapsed: 2.153508ms Jun 3 22:01:03.384: INFO: Pod "pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00640966s Jun 3 22:01:03.384: INFO: Pod "pod-update-activedeadlineseconds-574df198-0283-4cd1-9d7c-326065f55434" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:03.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-922" for this suite. 
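The pod update above succeeds because activeDeadlineSeconds is one of the few mutable pod-spec fields: it can be set or shortened on a running pod, not extended, and once the deadline passes the kubelet fails the pod with reason DeadlineExceeded, which is the phase transition logged above. A sketch of the same update as a strategic-merge patch (the pod name is shortened to a hypothetical one):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Shorten the running pod's deadline; the kubelet then terminates it
    // and the pod ends in Phase=Failed, Reason=DeadlineExceeded.
    const podName = "pod-update-activedeadlineseconds-example" // hypothetical
    patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
    if _, err := cs.CoreV1().Pods("pods-922").Patch(context.TODO(), podName,
        types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("deadline updated")
}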
• [SLOW TEST:6.569 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":183,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:40.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-252 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 22:00:40.647: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 22:00:40.679: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:42.683: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:00:44.683: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:46.682: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:48.686: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:50.683: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:52.683: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:54.684: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:56.685: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:00:58.684: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:01:00.686: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 22:01:00.691: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 22:01:08.733: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 3 22:01:08.733: INFO: Going to poll 10.244.3.237 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jun 3 22:01:08.735: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.237:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-252 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:01:08.735: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:01:08.827: INFO: Found all 1 expected endpoints: [netserver-0] Jun 3 22:01:08.827: INFO: Going to poll 10.244.4.65 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jun 3 22:01:08.829: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.65:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-252 
PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:01:08.829: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:01:08.912: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:08.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-252" for this suite. • [SLOW TEST:28.298 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":507,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:55.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 3 22:00:55.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3609 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Jun 3 22:00:55.733: INFO: stderr: "" Jun 3 22:00:55.733: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jun 3 22:00:55.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3609 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Jun 3 22:00:56.167: INFO: stderr: "" Jun 3 22:00:56.167: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 3 22:00:56.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3609 delete pods e2e-test-httpd-pod' Jun 3 22:01:10.190: INFO: stderr: "" Jun 3 22:01:10.190: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:10.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3609" for this suite. 
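kubectl's --dry-run=server, exercised above, is the CLI face of API-level dry-run: the request carries dryRun=All, the apiserver runs admission and validation and returns the patched object, but nothing is persisted, so the live pod keeps its original image exactly as the verification step checks. The same patch issued directly through client-go (names and images from the log):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // DryRun=All: the server admits, validates, and returns the result
    // without writing it, so the stored pod is unchanged.
    patch := []byte(`{"spec":{"containers":[{"name":"e2e-test-httpd-pod",` +
        `"image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}`)
    pod, err := cs.CoreV1().Pods("kubectl-3609").Patch(context.TODO(),
        "e2e-test-httpd-pod", types.StrategicMergePatchType, patch,
        metav1.PatchOptions{DryRun: []string{metav1.DryRunAll}})
    if err != nil {
        panic(err)
    }
    fmt.Println("dry-run result image:", pod.Spec.Containers[0].Image)
}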
• [SLOW TEST:14.637 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":18,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:03.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-9446 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9446 to expose endpoints map[] Jun 3 22:01:03.439: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Jun 3 22:01:04.445: INFO: successfully validated that service endpoint-test2 in namespace services-9446 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9446 Jun 3 22:01:04.459: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:06.463: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:08.465: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9446 to expose endpoints map[pod1:[80]] Jun 3 22:01:08.475: INFO: successfully validated that service endpoint-test2 in namespace services-9446 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-9446 Jun 3 22:01:08.490: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:10.495: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:12.496: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9446 to expose endpoints map[pod1:[80] pod2:[80]] Jun 3 22:01:12.512: INFO: successfully validated that service endpoint-test2 in namespace services-9446 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-9446 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9446 to expose endpoints map[pod2:[80]] Jun 3 22:01:12.529: INFO: successfully validated that service endpoint-test2 in namespace services-9446 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-9446 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9446 to expose endpoints map[] Jun 3 22:01:12.540: INFO: successfully validated 
that service endpoint-test2 in namespace services-9446 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:12.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9446" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.148 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":11,"skipped":188,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:12.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 3 22:01:12.603: INFO: Waiting up to 5m0s for pod "downward-api-a4b224da-65a3-4963-af07-1b070d399a85" in namespace "downward-api-5961" to be "Succeeded or Failed" Jun 3 22:01:12.607: INFO: Pod "downward-api-a4b224da-65a3-4963-af07-1b070d399a85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098629ms Jun 3 22:01:14.611: INFO: Pod "downward-api-a4b224da-65a3-4963-af07-1b070d399a85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007951484s Jun 3 22:01:16.614: INFO: Pod "downward-api-a4b224da-65a3-4963-af07-1b070d399a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011298872s STEP: Saw pod success Jun 3 22:01:16.614: INFO: Pod "downward-api-a4b224da-65a3-4963-af07-1b070d399a85" satisfied condition "Succeeded or Failed" Jun 3 22:01:16.617: INFO: Trying to get logs from node node2 pod downward-api-a4b224da-65a3-4963-af07-1b070d399a85 container dapi-container: STEP: delete the pod Jun 3 22:01:16.632: INFO: Waiting for pod downward-api-a4b224da-65a3-4963-af07-1b070d399a85 to disappear Jun 3 22:01:16.634: INFO: Pod downward-api-a4b224da-65a3-4963-af07-1b070d399a85 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:16.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5961" for this suite. 
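The host-IP test above is the environment-variable flavor of the downward API: a fieldRef on status.hostIP injects the node's IP into the container environment, and the test reads it back from the container's output. The relevant EnvVar shape (the variable name is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // fieldRef exposes pod/host status fields as env vars at container start.
    env := corev1.EnvVar{
        Name: "HOST_IP",
        ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
        },
    }
    fmt.Printf("%+v\n", env)
}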
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:17.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-c3c83afb-b3bf-4ac7-849b-27de5e84cc08 in namespace container-probe-3915 Jun 3 21:57:21.386: INFO: Started pod liveness-c3c83afb-b3bf-4ac7-849b-27de5e84cc08 in namespace container-probe-3915 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 21:57:21.388: INFO: Initial restart count of pod liveness-c3c83afb-b3bf-4ac7-849b-27de5e84cc08 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:21.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3915" for this suite. • [SLOW TEST:244.635 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":186,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:10.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:01:10.283: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 3 22:01:18.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5809 --namespace=crd-publish-openapi-5809 create -f -' Jun 3 22:01:18.934: INFO: stderr: "" Jun 3 22:01:18.935: INFO: stdout: "e2e-test-crd-publish-openapi-9041-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 3 22:01:18.935: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5809 --namespace=crd-publish-openapi-5809 delete e2e-test-crd-publish-openapi-9041-crds test-cr' Jun 3 22:01:19.100: INFO: stderr: "" Jun 3 22:01:19.100: INFO: stdout: "e2e-test-crd-publish-openapi-9041-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 3 22:01:19.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5809 --namespace=crd-publish-openapi-5809 apply -f -' Jun 3 22:01:19.453: INFO: stderr: "" Jun 3 22:01:19.453: INFO: stdout: "e2e-test-crd-publish-openapi-9041-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 3 22:01:19.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5809 --namespace=crd-publish-openapi-5809 delete e2e-test-crd-publish-openapi-9041-crds test-cr' Jun 3 22:01:19.635: INFO: stderr: "" Jun 3 22:01:19.635: INFO: stdout: "e2e-test-crd-publish-openapi-9041-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 3 22:01:19.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5809 explain e2e-test-crd-publish-openapi-9041-crds' Jun 3 22:01:20.009: INFO: stderr: "" Jun 3 22:01:20.009: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9041-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:23.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5809" for this suite. 
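A CRD that "preserves unknown fields at the schema root" sets x-kubernetes-preserve-unknown-fields on the top-level schema, which turns off pruning for the whole object. That explains both behaviors in this log: creates and applies with arbitrary properties succeed, and kubectl explain prints an essentially empty DESCRIPTION because the published OpenAPI schema declares no fields. A sketch of the root schema in the apiextensions v1 Go types:

package main

import (
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
    // Pruning disabled at the root: unknown properties survive a round
    // trip through the apiserver instead of being dropped.
    preserve := true
    root := apiextensionsv1.JSONSchemaProps{
        Type:                   "object",
        XPreserveUnknownFields: &preserve,
    }
    fmt.Printf("%+v\n", root)
}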
• [SLOW TEST:12.883 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":19,"skipped":323,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:01.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:01:01.716: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:03.720: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:05.721: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:07.724: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:09.719: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:11.720: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:13.719: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:15.722: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:17.719: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:19.720: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:21.719: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = false) Jun 3 22:01:23.721: INFO: The status of Pod test-webserver-6c8fbc2e-e1fb-49b3-bda8-83b97f82834e is Running (Ready = true) Jun 3 22:01:23.723: INFO: Container started at 2022-06-03 22:01:05 +0000 UTC, pod became ready at 2022-06-03 22:01:21 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:23.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1456" for this suite. 
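The long run of "Running (Ready = false)" lines above is the probe's initial delay at work: unlike a liveness probe, a failing or not-yet-started readiness probe never restarts the container, it only keeps the pod out of service endpoints until it passes. A sketch of such a probe, assuming current client-go types (in the v1.21 sources used by this run the embedded field was named Handler rather than ProbeHandler):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Readiness gates traffic only; restarts are the liveness probe's job.
    probe := &corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
        },
        InitialDelaySeconds: 20,
        PeriodSeconds:       3,
    }
    fmt.Printf("%+v\n", probe)
}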
• [SLOW TEST:22.052 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":77,"failed":0} SS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:23.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 3 22:01:23.897: INFO: starting watch STEP: patching STEP: updating Jun 3 22:01:23.905: INFO: waiting for watch events with expected annotations Jun 3 22:01:23.905: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:23.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-6935" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":20,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:16.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:27.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7369" for this suite. • [SLOW TEST:11.108 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":13,"skipped":220,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:01.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-rqs2 STEP: Creating a pod to test atomic-volume-subpath Jun 3 22:01:02.025: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rqs2" in namespace "subpath-3129" to be "Succeeded or Failed" Jun 3 22:01:02.028: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039143ms Jun 3 22:01:04.031: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006384832s Jun 3 22:01:06.035: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010029516s Jun 3 22:01:08.042: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 6.017367328s Jun 3 22:01:10.046: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 8.021758696s Jun 3 22:01:12.053: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 10.028564789s Jun 3 22:01:14.058: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 12.033276499s Jun 3 22:01:16.060: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.03585511s Jun 3 22:01:18.065: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 16.040148148s Jun 3 22:01:20.067: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 18.042824344s Jun 3 22:01:22.071: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 20.046353599s Jun 3 22:01:24.074: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 22.049757492s Jun 3 22:01:26.077: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 24.052945708s Jun 3 22:01:28.082: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Running", Reason="", readiness=true. Elapsed: 26.057547699s Jun 3 22:01:30.086: INFO: Pod "pod-subpath-test-downwardapi-rqs2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.061026436s STEP: Saw pod success Jun 3 22:01:30.086: INFO: Pod "pod-subpath-test-downwardapi-rqs2" satisfied condition "Succeeded or Failed" Jun 3 22:01:30.088: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-rqs2 container test-container-subpath-downwardapi-rqs2: STEP: delete the pod Jun 3 22:01:30.100: INFO: Waiting for pod pod-subpath-test-downwardapi-rqs2 to disappear Jun 3 22:01:30.102: INFO: Pod pod-subpath-test-downwardapi-rqs2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rqs2 Jun 3 22:01:30.102: INFO: Deleting pod "pod-subpath-test-downwardapi-rqs2" in namespace "subpath-3129" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:30.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3129" for this suite. 
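------------------------------
The Subpath run above mounts a downward API volume through a subPath, so the container sees one projected file rather than the whole volume. A minimal sketch with hypothetical names (the test's pod-subpath-test-downwardapi-* pod does the same kind of wiring):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /data/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /data/podname
      subPath: podname          # mount a single file out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

"Atomic writer volumes" refers to how these volumes are written (a symlink swap), which is why the suite exercises subPath against them specifically; a subPath mount pins one resolved file and does not pick up later updates.
------------------------------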
• [SLOW TEST:28.124 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":184,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:21.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics Jun 3 22:01:32.100: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 3 22:01:32.271: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 3 22:01:32.271: INFO: Deleting pod "simpletest-rc-to-be-deleted-bplxg" in namespace "gc-3749" Jun 3 22:01:32.278: INFO: Deleting pod "simpletest-rc-to-be-deleted-hgftn" in namespace "gc-3749" Jun 3 22:01:32.286: INFO: Deleting pod "simpletest-rc-to-be-deleted-kmtbp" in namespace "gc-3749" Jun 3 22:01:32.292: INFO: Deleting pod "simpletest-rc-to-be-deleted-ms794" in namespace "gc-3749" Jun 3 22:01:32.298: INFO: Deleting pod "simpletest-rc-to-be-deleted-n7sxt" in namespace "gc-3749" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:32.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3749" for this suite. 
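------------------------------
In the garbage collector case above, half of the pods carry ownerReferences to both replication controllers, and the GC must leave those pods alone while the deletion of simpletest-rc-to-be-deleted waits for its dependents, because they still have a live owner. A rough sketch of observing and triggering this by hand (names taken from the run above, flags as in kubectl 1.21):

# list each pod together with the names of all of its owners
kubectl -n gc-3749 get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.ownerReferences[*].name}{"\n"}{end}'

# foreground cascade: the RC waits on its dependents, but pods that also
# list the surviving RC as an owner are not collected
kubectl -n gc-3749 delete rc simpletest-rc-to-be-deleted --cascade=foreground
------------------------------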
• [SLOW TEST:10.326 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":10,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:08.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-td24 STEP: Creating a pod to test atomic-volume-subpath Jun 3 22:01:09.031: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-td24" in namespace "subpath-7113" to be "Succeeded or Failed" Jun 3 22:01:09.034: INFO: Pod "pod-subpath-test-secret-td24": Phase="Pending", Reason="", readiness=false. Elapsed: 3.575155ms Jun 3 22:01:11.038: INFO: Pod "pod-subpath-test-secret-td24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006958907s Jun 3 22:01:13.047: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 4.016784988s Jun 3 22:01:15.051: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 6.02011433s Jun 3 22:01:17.055: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 8.024422051s Jun 3 22:01:19.060: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 10.029345702s Jun 3 22:01:21.064: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 12.033465041s Jun 3 22:01:23.068: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 14.037117887s Jun 3 22:01:25.073: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 16.042585907s Jun 3 22:01:27.080: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 18.049205048s Jun 3 22:01:29.086: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 20.055628847s Jun 3 22:01:31.089: INFO: Pod "pod-subpath-test-secret-td24": Phase="Running", Reason="", readiness=true. Elapsed: 22.058835764s Jun 3 22:01:33.094: INFO: Pod "pod-subpath-test-secret-td24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.063283495s STEP: Saw pod success Jun 3 22:01:33.094: INFO: Pod "pod-subpath-test-secret-td24" satisfied condition "Succeeded or Failed" Jun 3 22:01:33.096: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-td24 container test-container-subpath-secret-td24: STEP: delete the pod Jun 3 22:01:33.108: INFO: Waiting for pod pod-subpath-test-secret-td24 to disappear Jun 3 22:01:33.110: INFO: Pod pod-subpath-test-secret-td24 no longer exists STEP: Deleting pod pod-subpath-test-secret-td24 Jun 3 22:01:33.110: INFO: Deleting pod "pod-subpath-test-secret-td24" in namespace "subpath-7113" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:33.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7113" for this suite. • [SLOW TEST:24.183 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":511,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:33.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:33.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4393" for this suite. 
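------------------------------
The Services block just above walks an Endpoints object through create, list, update, patch, and delete-by-collection. A minimal sketch of the same surface with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: demo-endpoint
  labels:
    test: lifecycle
subsets:
- addresses:
  - ip: 10.0.0.10
  ports:
  - port: 80
EOF
kubectl get endpoints demo-endpoint
kubectl patch endpoints demo-endpoint --type=merge -p '{"metadata":{"labels":{"test":"patched"}}}'
kubectl delete endpoints -l test=patched   # collection delete via label selector
------------------------------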
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":32,"skipped":531,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:33.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:33.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-430" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":33,"skipped":537,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:24.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-ab30e4fb-2c65-4617-b0be-bb47904c5d56 STEP: Creating a pod to test consume configMaps Jun 3 22:01:24.051: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836" in namespace "projected-2732" to be "Succeeded or Failed" Jun 3 22:01:24.053: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082167ms Jun 3 22:01:26.057: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005641914s Jun 3 22:01:28.062: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010312651s Jun 3 22:01:30.065: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0141952s Jun 3 22:01:32.068: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016896122s Jun 3 22:01:34.072: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.020998485s STEP: Saw pod success Jun 3 22:01:34.072: INFO: Pod "pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836" satisfied condition "Succeeded or Failed" Jun 3 22:01:34.075: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836 container agnhost-container: STEP: delete the pod Jun 3 22:01:34.086: INFO: Waiting for pod pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836 to disappear Jun 3 22:01:34.088: INFO: Pod pod-projected-configmaps-7f2dc85f-cde3-4995-aaea-2eb383c77836 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:34.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2732" for this suite. • [SLOW TEST:10.080 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":354,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:32.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:01:32.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2" in namespace "downward-api-8610" to be "Succeeded or Failed" Jun 3 22:01:32.436: INFO: Pod "downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841278ms Jun 3 22:01:34.439: INFO: Pod "downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005848476s Jun 3 22:01:36.442: INFO: Pod "downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008705563s Jun 3 22:01:38.446: INFO: Pod "downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012533806s STEP: Saw pod success Jun 3 22:01:38.446: INFO: Pod "downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2" satisfied condition "Succeeded or Failed" Jun 3 22:01:38.448: INFO: Trying to get logs from node node1 pod downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2 container client-container: STEP: delete the pod Jun 3 22:01:38.461: INFO: Waiting for pod downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2 to disappear Jun 3 22:01:38.463: INFO: Pod downwardapi-volume-390e21e4-3758-4ccb-a0bd-44d5d93503b2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:38.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8610" for this suite. • [SLOW TEST:6.070 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":229,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:23.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 3 22:01:34.284: INFO: Successfully updated pod "adopt-release-sx6z4" STEP: Checking that the Job readopts the Pod Jun 3 22:01:34.284: INFO: Waiting up to 15m0s for pod "adopt-release-sx6z4" in namespace "job-8596" to be "adopted" Jun 3 22:01:34.286: INFO: Pod "adopt-release-sx6z4": Phase="Running", Reason="", readiness=true. Elapsed: 2.00074ms Jun 3 22:01:36.289: INFO: Pod "adopt-release-sx6z4": Phase="Running", Reason="", readiness=true. Elapsed: 2.004895899s Jun 3 22:01:36.289: INFO: Pod "adopt-release-sx6z4" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 3 22:01:36.799: INFO: Successfully updated pod "adopt-release-sx6z4" STEP: Checking that the Job releases the Pod Jun 3 22:01:36.799: INFO: Waiting up to 15m0s for pod "adopt-release-sx6z4" in namespace "job-8596" to be "released" Jun 3 22:01:36.801: INFO: Pod "adopt-release-sx6z4": Phase="Running", Reason="", readiness=true. Elapsed: 2.35271ms Jun 3 22:01:38.805: INFO: Pod "adopt-release-sx6z4": Phase="Running", Reason="", readiness=true. Elapsed: 2.006131415s Jun 3 22:01:38.805: INFO: Pod "adopt-release-sx6z4" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:38.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8596" for this suite. 
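------------------------------
Adoption in the Job run above is driven by label matching plus the controller ownerReference: orphaning the pod (clearing its controllerRef, which the test does through the API) gets it re-adopted because its labels still match the Job's selector, while stripping the match label makes the controller release it instead. A rough sketch, assuming the match-label key is `job` (hypothetical; the real key is whatever the Job's selector uses):

# inspect who currently controls the pod
kubectl -n job-8596 get pod adopt-release-sx6z4 -o jsonpath='{.metadata.ownerReferences[*].name}'

# remove the (assumed) match label; the Job controller should then release the pod
kubectl -n job-8596 label pod adopt-release-sx6z4 job-
------------------------------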
• [SLOW TEST:15.074 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":10,"skipped":79,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:38.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2105.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2105.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 22:01:42.566: INFO: DNS probes using dns-2105/dns-test-1af87fc2-0097-4d37-881a-e974e6f87757 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:42.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2105" for this suite. 
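------------------------------
The dig loops quoted above are injected into two prober pods (wheezy and jessie images) that write an OK marker for each name that resolves over both UDP and TCP. The same check can be reproduced by hand from any throwaway pod; the pod name and image here are arbitrary:

kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup kubernetes.default.svc.cluster.local
------------------------------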
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":12,"skipped":234,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:42.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0603 22:01:42.642105 27 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Jun 3 22:01:42.648: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jun 3 22:01:42.651: INFO: starting watch STEP: patching STEP: updating Jun 3 22:01:42.664: INFO: waiting for watch events with expected annotations Jun 3 22:01:42.664: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:42.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-335" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":13,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:38.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 3 22:01:38.870: INFO: Waiting up to 5m0s for pod "pod-eee396fe-d27c-4526-8422-1a260a36c969" in namespace "emptydir-8071" to be "Succeeded or Failed" Jun 3 22:01:38.873: INFO: Pod "pod-eee396fe-d27c-4526-8422-1a260a36c969": Phase="Pending", Reason="", readiness=false. Elapsed: 3.846093ms Jun 3 22:01:40.879: INFO: Pod "pod-eee396fe-d27c-4526-8422-1a260a36c969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009641426s Jun 3 22:01:42.883: INFO: Pod "pod-eee396fe-d27c-4526-8422-1a260a36c969": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013849038s STEP: Saw pod success Jun 3 22:01:42.884: INFO: Pod "pod-eee396fe-d27c-4526-8422-1a260a36c969" satisfied condition "Succeeded or Failed" Jun 3 22:01:42.886: INFO: Trying to get logs from node node2 pod pod-eee396fe-d27c-4526-8422-1a260a36c969 container test-container: STEP: delete the pod Jun 3 22:01:42.901: INFO: Waiting for pod pod-eee396fe-d27c-4526-8422-1a260a36c969 to disappear Jun 3 22:01:42.903: INFO: Pod pod-eee396fe-d27c-4526-8422-1a260a36c969 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:42.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8071" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":85,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:30.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:01:30.141: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 3 22:01:38.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9474 --namespace=crd-publish-openapi-9474 create -f -' Jun 3 22:01:38.806: INFO: stderr: "" Jun 3 22:01:38.807: INFO: stdout: "e2e-test-crd-publish-openapi-7205-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 3 22:01:38.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9474 --namespace=crd-publish-openapi-9474 delete e2e-test-crd-publish-openapi-7205-crds test-cr' Jun 3 22:01:39.007: INFO: stderr: "" Jun 3 22:01:39.007: INFO: stdout: "e2e-test-crd-publish-openapi-7205-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jun 3 22:01:39.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9474 --namespace=crd-publish-openapi-9474 apply -f -' Jun 3 22:01:39.332: INFO: stderr: "" Jun 3 22:01:39.332: INFO: stdout: "e2e-test-crd-publish-openapi-7205-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jun 3 22:01:39.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9474 --namespace=crd-publish-openapi-9474 delete e2e-test-crd-publish-openapi-7205-crds test-cr' Jun 3 22:01:39.517: INFO: stderr: "" Jun 3 22:01:39.517: INFO: stdout: "e2e-test-crd-publish-openapi-7205-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jun 3 22:01:39.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9474 explain 
e2e-test-crd-publish-openapi-7205-crds' Jun 3 22:01:39.871: INFO: stderr: "" Jun 3 22:01:39.871: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7205-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:42.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9474" for this suite. • [SLOW TEST:12.863 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":13,"skipped":186,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:43.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 3 22:01:43.073: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jun 3 22:01:43.076: INFO: starting watch STEP: patching STEP: updating Jun 3 22:01:43.086: INFO: waiting for watch events with expected annotations Jun 3 22:01:43.086: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:43.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-6431" for this suite.
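------------------------------
The EndpointSlice operations above run against discovery.k8s.io/v1. A minimal slice manifest with hypothetical names, covering the same create/list/delete-collection surface:

cat <<'EOF' | kubectl apply -f -
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: demo-slice
  labels:
    kubernetes.io/service-name: demo-service
addressType: IPv4
ports:
- name: http
  port: 80
  protocol: TCP
endpoints:
- addresses:
  - "10.0.0.11"
  conditions:
    ready: true
EOF
kubectl get endpointslices demo-slice
kubectl delete endpointslices -l kubernetes.io/service-name=demo-service
------------------------------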
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":14,"skipped":208,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:42.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:01:42.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e" in namespace "projected-3040" to be "Succeeded or Failed" Jun 3 22:01:42.791: INFO: Pod "downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103164ms Jun 3 22:01:44.796: INFO: Pod "downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006613155s Jun 3 22:01:46.799: INFO: Pod "downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009654419s STEP: Saw pod success Jun 3 22:01:46.799: INFO: Pod "downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e" satisfied condition "Succeeded or Failed" Jun 3 22:01:46.802: INFO: Trying to get logs from node node2 pod downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e container client-container: STEP: delete the pod Jun 3 22:01:46.814: INFO: Waiting for pod downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e to disappear Jun 3 22:01:46.816: INFO: Pod downwardapi-volume-cf2be2ba-645d-476f-964c-c2761553f13e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3040" for this suite. 
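------------------------------
The projected downwardAPI case above relies on defaulting: when a container sets no CPU limit, limits.cpu exposed through the downward API resolves to the node's allocatable CPU. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits here, so limits.cpu falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client
              resource: limits.cpu
EOF
kubectl logs cpu-limit-demo   # after completion, prints the defaulted limit
------------------------------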
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:43.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-83a486ac-ee37-4509-bb7a-f070c246cf47 STEP: Creating a pod to test consume configMaps Jun 3 22:01:43.036: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd" in namespace "projected-3435" to be "Succeeded or Failed" Jun 3 22:01:43.039: INFO: Pod "pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.77389ms Jun 3 22:01:45.044: INFO: Pod "pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007601966s Jun 3 22:01:47.050: INFO: Pod "pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013609689s STEP: Saw pod success Jun 3 22:01:47.050: INFO: Pod "pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd" satisfied condition "Succeeded or Failed" Jun 3 22:01:47.052: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd container agnhost-container: STEP: delete the pod Jun 3 22:01:47.064: INFO: Waiting for pod pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd to disappear Jun 3 22:01:47.065: INFO: Pod pod-projected-configmaps-effcea25-0061-47c8-9ba9-43f4f2d578cd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:47.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3435" for this suite. 
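------------------------------
"Mappings and Item mode" in the projected configMap test above means each selected key is renamed to a chosen path and given a per-item file mode. A minimal sketch with hypothetical names:

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm && cat /etc/cm/renamed"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1
            path: renamed      # mapping: the key appears under a new file name
            mode: 0400         # per-item file mode, which the test asserts on
EOF
------------------------------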
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":124,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:47.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:01:47.099: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Jun 3 22:01:47.113: INFO: The status of Pod pod-logs-websocket-bd0a7d54-5e00-4cca-92fc-e2dbc399f86d is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:49.116: INFO: The status of Pod pod-logs-websocket-bd0a7d54-5e00-4cca-92fc-e2dbc399f86d is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:51.117: INFO: The status of Pod pod-logs-websocket-bd0a7d54-5e00-4cca-92fc-e2dbc399f86d is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:01:53.118: INFO: The status of Pod pod-logs-websocket-bd0a7d54-5e00-4cca-92fc-e2dbc399f86d is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:53.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9406" for this suite. 
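------------------------------
"Over websockets" in the Pods test above means the pod log endpoint is dialed with a websocket upgrade instead of the plain HTTP streaming that kubectl logs uses. A rough handshake sketch against the API server; $APISERVER, $TOKEN, and my-pod are placeholders, and the frames arrive unparsed:

curl -k -H "Authorization: Bearer $TOKEN" \
     -H "Connection: Upgrade" -H "Upgrade: websocket" \
     -H "Sec-WebSocket-Version: 13" \
     -H "Sec-WebSocket-Key: $(head -c16 /dev/urandom | base64)" \
     "https://$APISERVER/api/v1/namespaces/default/pods/my-pod/log?follow=true"
------------------------------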
• [SLOW TEST:6.889 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:43.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:01:43.323: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:01:45.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890503, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890503, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890503, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890503, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:01:48.344: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:01:48.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1921-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:56.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8691" for this suite. STEP: Destroying namespace "webhook-8691-markers" for this suite. 
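------------------------------
The admission test above registers a mutating webhook for a custom resource and then checks that fields the webhook injects are still pruned against the CRD's structural schema. The registration side looks roughly like this, with hypothetical names and an elided CA bundle; the referenced Service must front a server that implements the AdmissionReview exchange:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-cr-mutator
webhooks:
- name: mutate-widgets.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["widgets"]
  clientConfig:
    service:
      namespace: default
      name: demo-webhook
      path: /mutate
    caBundle: ""   # base64-encoded CA that signed the webhook's serving cert
EOF
------------------------------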
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.333 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":15,"skipped":212,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:54.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:01:58.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1473" for this suite. 
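------------------------------
The sysctl pod above sets kernel.shm_rmid_forced through the pod security context and then reads it back from inside the container. A minimal sketch with hypothetical names; sysctl names outside the kubelet's safe set additionally require the kubelet's --allowed-unsafe-sysctls opt-in:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
EOF
kubectl logs sysctl-demo   # expect: kernel.shm_rmid_forced = 1
------------------------------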
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":14,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:37.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0603 22:00:37.430798 38 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:01.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1382" for this suite. • [SLOW TEST:84.055 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":41,"skipped":751,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:58.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Jun 3 22:01:58.222: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:00.228: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:02.226: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:04.227: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:06.226: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Jun 3 22:02:06.247: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:08.252: INFO: The status of Pod test-host-network-pod 
is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 3 22:02:08.255: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.255: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.340: INFO: Exec stderr: "" Jun 3 22:02:08.340: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.340: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.436: INFO: Exec stderr: "" Jun 3 22:02:08.436: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.436: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.523: INFO: Exec stderr: "" Jun 3 22:02:08.523: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.523: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.605: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 3 22:02:08.605: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.605: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.690: INFO: Exec stderr: "" Jun 3 22:02:08.690: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.690: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.772: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 3 22:02:08.772: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.772: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.857: INFO: Exec stderr: "" Jun 3 22:02:08.857: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.857: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:08.946: INFO: Exec stderr: "" Jun 3 22:02:08.946: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:08.946: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:09.023: INFO: Exec stderr: "" Jun 3 22:02:09.023: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2255 PodName:test-host-network-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:09.023: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:09.142: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:09.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2255" for this suite. • [SLOW TEST:10.973 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":191,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:09.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-c31aa5bf-48ab-4309-ad55-00ad5f21609d STEP: Creating a pod to test consume secrets Jun 3 22:02:09.203: INFO: Waiting up to 5m0s for pod "pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a" in namespace "secrets-7853" to be "Succeeded or Failed" Jun 3 22:02:09.210: INFO: Pod "pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.529939ms Jun 3 22:02:11.213: INFO: Pod "pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009690096s Jun 3 22:02:13.217: INFO: Pod "pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013619437s STEP: Saw pod success Jun 3 22:02:13.217: INFO: Pod "pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a" satisfied condition "Succeeded or Failed" Jun 3 22:02:13.219: INFO: Trying to get logs from node node2 pod pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a container secret-volume-test: STEP: delete the pod Jun 3 22:02:13.233: INFO: Waiting for pod pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a to disappear Jun 3 22:02:13.235: INFO: Pod pod-secrets-97b353b7-e5e3-4901-b6d0-aaf3fa33441a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:13.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7853" for this suite. 
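For reference, the CronJob behavior exercised above by "should schedule multiple jobs concurrently" comes down to concurrencyPolicy: Allow plus a Job that outlives its schedule interval, so that at least two Jobs end up Running at once. A minimal sketch of an equivalent manifest, with an illustrative name, image, and sleep duration not taken from this run (the suite itself still talks to batch/v1beta1 on this v1.21 cluster, which is what triggers the deprecation warning above; batch/v1 shown here is the replacement):

  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: concurrent-cronjob          # hypothetical name
  spec:
    schedule: "*/1 * * * *"           # fire every minute
    concurrencyPolicy: Allow          # permit overlapping Jobs, the behavior under test
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: sleeper
              image: busybox          # illustrative image
              command: ["sleep", "300"]   # outlive the one-minute interval so runs overlap

Created with the same idiom the suite uses (kubectl create -f -), listing Jobs after about two minutes should show more than one Running, which is what the "Ensuring more than one job is running at a time" step waits for.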
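The /etc/hosts verification above hinges on two kubelet rules: it manages /etc/hosts for every container of a hostNetwork=false pod, and it skips any container that mounts /etc/hosts itself (the busybox-3 case). A minimal sketch of the hostNetwork=false pod under those assumptions, using an illustrative busybox image and omitting the busybox-2 container and the separate hostNetwork=true pod from the run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
  spec:
    hostNetwork: false                 # kubelet rewrites /etc/hosts for these containers
    volumes:
    - name: host-etc-hosts
      hostPath:
        path: /etc/hosts               # the node's real hosts file
    containers:
    - name: busybox-1                  # no /etc/hosts mount: the file is kubelet-managed
      image: busybox                   # illustrative image
      command: ["sleep", "900"]
      volumeMounts:
      - name: host-etc-hosts
        mountPath: /etc/hosts-original # host copy kept for comparison (the cat /etc/hosts-original execs)
    - name: busybox-3                  # mounts /etc/hosts explicitly, so the kubelet leaves it alone
      image: busybox
      command: ["sleep", "900"]
      volumeMounts:
      - name: host-etc-hosts
        mountPath: /etc/hosts
      - name: host-etc-hosts
        mountPath: /etc/hosts-original

The paired cat /etc/hosts and cat /etc/hosts-original execs then tell the cases apart: in busybox-1 the two files differ because the kubelet wrote its managed version, while in busybox-3 (and in every container of the hostNetwork=true pod) /etc/hosts is the node's own file.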
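The secret-volume test that just passed checks file permissions as much as content: defaultMode on the secret volume source sets the mode of every file projected from the Secret. A minimal sketch of the pair of objects involved, with illustrative names and payload (the run used generated names, and the agnhost mounttest arguments are an assumption about how the content/mode check is typically wired, not output from this run):

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test
  data:
    data-1: dmFsdWUtMQ==               # base64 of "value-1", illustrative payload
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets
  spec:
    restartPolicy: Never
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test
        defaultMode: 0400              # the knob under test: files land as -r--------
    containers:
    - name: secret-volume-test         # matches the container name in the log
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["mounttest", "--file_content=/etc/secret-volume/data-1", "--file_mode=/etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true

Because the container prints the file's content and mode and then exits, the framework waits for "Succeeded or Failed" rather than Running and then scrapes the pod log, which is the sequence visible above.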
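The StatefulSet case that follows fails, and the events collected further down point at the environment rather than the test: the spec deliberately gives stateful pod ss-0 a hostPort already held by test-pod and waits for the resulting create/delete churn, but this cluster's PodSecurityPolicy admission rejects hostPort 21017 outright, so ss-0 is never created even once and "Pod ss-0 expected to be re-created at least once" can never be satisfied. (test-pod itself carries kubernetes.io/psp: privileged, so it was admitted under a permissive policy that the StatefulSet controller's service account evidently cannot use.) A hedged sketch of the kind of policy the FailedCreate events imply, with a hypothetical name and just one of the reported allowed ranges:

  apiVersion: policy/v1beta1           # the PSP API served by this v1.21 cluster
  kind: PodSecurityPolicy
  metadata:
    name: restricted-hostports         # hypothetical name
  spec:
    privileged: false
    seLinux:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    fsGroup:
      rule: RunAsAny
    volumes: ["*"]
    hostPorts:
    - min: 9103                        # pods may only bind host ports in this range
      max: 9104

Any pod whose containers request a hostPort outside the allowed ranges is forbidden at admission, which matches the repeated "Host port 21017 is not allowed to be used" messages in the events below.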
• ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":10,"skipped":175,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 21:57:06.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-7924 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7924 STEP: Creating statefulset with conflicting port in namespace statefulset-7924 STEP: Waiting until pod test-pod will start running in namespace statefulset-7924 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7924 Jun 3 22:02:14.473: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c31800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c31800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000c31800, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 3 22:02:14.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7924 describe po test-pod' Jun 3 22:02:14.686: INFO: stderr: "" Jun 3 22:02:14.686: INFO: stdout: "Name: test-pod\nNamespace: statefulset-7924\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 03 Jun 2022 21:57:06 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.186\"\n ],\n \"mac\": \"72:db:90:d6:83:b6\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.186\"\n ],\n \"mac\": \"72:db:90:d6:83:b6\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.186\nIPs:\n IP: 10.244.3.186\nContainers:\n webserver:\n Container ID: docker://014019824f6d673dd9d3d5f355f8d36dae9383059ab19a47ef49f1dd24b576cb\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 03 Jun 2022 21:57:10 +0000\n Ready: True\n Restart Count: 0\n Environment: \n 
Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5x2vr (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-5x2vr:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m5s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m4s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 270.387762ms\n Normal Created 5m4s kubelet Created container webserver\n Normal Started 5m4s kubelet Started container webserver\n" Jun 3 22:02:14.686: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-7924 Priority: 0 Node: node1/10.10.190.207 Start Time: Fri, 03 Jun 2022 21:57:06 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.186" ], "mac": "72:db:90:d6:83:b6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.186" ], "mac": "72:db:90:d6:83:b6", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.3.186 IPs: IP: 10.244.3.186 Containers: webserver: Container ID: docker://014019824f6d673dd9d3d5f355f8d36dae9383059ab19a47ef49f1dd24b576cb Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 03 Jun 2022 21:57:10 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5x2vr (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-5x2vr: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m5s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m4s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 270.387762ms Normal Created 5m4s kubelet Created container webserver Normal Started 5m4s kubelet Started container webserver Jun 3 22:02:14.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7924 logs test-pod --tail=100' Jun 3 22:02:14.848: INFO: stderr: "" Jun 3 22:02:14.848: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. 
Set the 'ServerName' directive globally to suppress this message\n[Fri Jun 03 21:57:10.943034 2022] [mpm_event:notice] [pid 1:tid 140661585759080] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jun 03 21:57:10.943069 2022] [core:notice] [pid 1:tid 140661585759080] AH00094: Command line: 'httpd -D FOREGROUND'\n" Jun 3 22:02:14.848: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.186. Set the 'ServerName' directive globally to suppress this message [Fri Jun 03 21:57:10.943034 2022] [mpm_event:notice] [pid 1:tid 140661585759080] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Fri Jun 03 21:57:10.943069 2022] [core:notice] [pid 1:tid 140661585759080] AH00094: Command line: 'httpd -D FOREGROUND' Jun 3 22:02:14.848: INFO: Deleting all statefulset in ns statefulset-7924 Jun 3 22:02:14.851: INFO: Scaling statefulset ss to 0 Jun 3 22:02:14.860: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:02:14.862: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-7924". STEP: Found 7 events. Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:06 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:06 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]] Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:06 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9103-9104]] Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:09 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:10 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 270.387762ms Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:10 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver Jun 3 22:02:14.873: INFO: At 2022-06-03 21:57:10 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver Jun 3 22:02:14.875: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 22:02:14.875: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 21:57:06 +0000 UTC }] Jun 3 22:02:14.875: INFO: Jun 3 22:02:14.880: INFO: Logging node info for node master1 Jun 3 22:02:14.882: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 41729 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:02:14.883: INFO: Logging kubelet events for node master1 Jun 3 22:02:14.885: INFO: Logging pods the kubelet 
thinks is on node master1 Jun 3 22:02:14.911: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:02:14.911: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container autoscaler ready: true, restart count 2 Jun 3 22:02:14.911: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container coredns ready: true, restart count 1 Jun 3 22:02:14.911: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:14.911: INFO: Container docker-registry ready: true, restart count 0 Jun 3 22:02:14.911: INFO: Container nginx ready: true, restart count 0 Jun 3 22:02:14.911: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container kube-scheduler ready: true, restart count 0 Jun 3 22:02:14.911: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:14.911: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:02:14.911: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:02:14.911: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:02:14.911: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 3 22:02:14.911: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:02:14.911: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:02:14.911: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:14.911: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:02:15.006: INFO: Latency metrics for node master1 Jun 3 22:02:15.006: INFO: Logging node info for node master2 Jun 3 22:02:15.008: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 41725 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:02:15.008: INFO: Logging kubelet events for node master2 Jun 3 22:02:15.011: INFO: Logging pods the kubelet thinks is on node master2 Jun 3 22:02:15.025: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:02:15.025: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 22:02:15.025: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:02:15.025: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.025: INFO: Container prometheus-operator ready: true, restart count 0 Jun 3 22:02:15.025: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.025: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:02:15.025: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 22:02:15.025: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:02:15.025: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:02:15.025: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:02:15.025: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.025: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:02:15.112: INFO: Latency metrics for node master2 Jun 3 22:02:15.112: INFO: Logging node info for node master3 Jun 3 22:02:15.116: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 41728 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:02:12 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:02:15.116: INFO: Logging kubelet events for node master3 Jun 3 22:02:15.119: INFO: Logging pods the kubelet thinks is on node master3 Jun 3 22:02:15.128: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.128: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.128: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:02:15.128: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 22:02:15.128: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 22:02:15.128: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:02:15.128: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container coredns ready: true, restart count 1 Jun 3 22:02:15.128: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:02:15.128: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:02:15.128: INFO: Container kube-flannel ready: true, restart count 2 Jun 3 22:02:15.128: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:02:15.128: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.128: INFO: Container nfd-controller ready: true, restart count 0 Jun 3 22:02:15.212: INFO: Latency metrics for node master3 Jun 3 22:02:15.212: INFO: Logging node info for node node1 Jun 3 22:02:15.215: INFO: Node Info: &Node{ObjectMeta:{node1 482ecf0f-7f88-436d-a313-227096fe8b8d 41695 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:11:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:02:15.215: INFO: Logging kubelet events for node node1 Jun 3 22:02:15.218: INFO: Logging pods the kubelet thinks is on node node1 Jun 3 22:02:15.234: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.234: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:02:15.234: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.234: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:02:15.234: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.234: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 
22:02:15.234: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:02:15.234: INFO: Container collectd ready: true, restart count 0 Jun 3 22:02:15.234: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:02:15.234: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.234: INFO: test-pod started at 2022-06-03 22:01:58 +0000 UTC (0+3 container statuses recorded) Jun 3 22:02:15.234: INFO: Container busybox-1 ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container busybox-2 ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container busybox-3 ready: true, restart count 0 Jun 3 22:02:15.235: INFO: test-host-network-pod started at 2022-06-03 22:02:06 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.235: INFO: Container busybox-1 ready: false, restart count 0 Jun 3 22:02:15.235: INFO: Container busybox-2 ready: false, restart count 0 Jun 3 22:02:15.235: INFO: test-pod started at 2022-06-03 21:57:06 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container webserver ready: true, restart count 0 Jun 3 22:02:15.235: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:02:15.235: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:02:15.235: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 22:02:15.235: INFO: pod-logs-websocket-bd0a7d54-5e00-4cca-92fc-e2dbc399f86d started at 2022-06-03 22:01:47 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container main ready: true, restart count 0 Jun 3 22:02:15.235: INFO: busybox-031adb13-b270-44e5-b052-30ffbac3d9c3 started at 2022-06-03 22:00:56 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container busybox ready: true, restart count 0 Jun 3 22:02:15.235: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:02:15.235: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 3 22:02:15.235: INFO: Container discover ready: false, restart count 0 Jun 3 22:02:15.235: INFO: Container init ready: false, restart count 0 Jun 3 22:02:15.235: INFO: Container install ready: false, restart count 0 Jun 3 22:02:15.235: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 3 22:02:15.235: INFO: Container config-reloader ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container grafana ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container prometheus ready: true, restart count 1 Jun 3 22:02:15.235: INFO: netserver-0 started at 2022-06-03 22:02:01 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container webserver ready: false, restart count 0 Jun 3 22:02:15.235: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:02:15.235: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses 
recorded) Jun 3 22:02:15.235: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:02:15.235: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.235: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.235: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:02:15.235: INFO: sample-webhook-deployment-78988fc6cd-b9dxt started at 2022-06-03 22:01:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.235: INFO: Container sample-webhook ready: true, restart count 0 Jun 3 22:02:15.449: INFO: Latency metrics for node node1 Jun 3 22:02:15.449: INFO: Logging node info for node node2 Jun 3 22:02:15.453: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 41698 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:12:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:02:08 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:02:15.454: INFO: Logging kubelet events for node node2 Jun 3 22:02:15.456: INFO: Logging pods the kubelet thinks is on node node2 Jun 3 22:02:15.478: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.478: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:02:15.478: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.478: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:02:15.478: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:02:15.478: INFO: Container collectd ready: true, restart count 0 Jun 3 22:02:15.478: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:02:15.478: 
INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.478: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.478: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 22:02:15.478: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.478: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:02:15.478: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:02:15.478: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:02:15.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:02:15.478: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:02:15.479: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 22:02:15.479: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:02:15.479: INFO: forbid-27571562-gfhz5 started at 2022-06-03 22:02:00 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.479: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:02:15.479: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded) Jun 3 22:02:15.479: INFO: Container discover ready: false, restart count 0 Jun 3 22:02:15.479: INFO: Container init ready: false, restart count 0 Jun 3 22:02:15.479: INFO: Container install ready: false, restart count 0 Jun 3 22:02:15.479: INFO: concurrent-27571561-b82rp started at 2022-06-03 22:01:00 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.479: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container tas-extender ready: true, restart count 0 Jun 3 22:02:15.479: INFO: foo-5lfsg started at 2022-06-03 22:01:46 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.479: INFO: adopt-release-zrsps started at 2022-06-03 22:01:23 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.479: INFO: netserver-1 started at 2022-06-03 22:02:01 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container webserver ready: false, restart count 0 Jun 3 22:02:15.479: INFO: ss2-0 started at 2022-06-03 22:02:13 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container webserver ready: false, restart count 0 Jun 3 22:02:15.479: INFO: pod-init-c6ec4863-28f2-40f2-a279-7621ed7c503c started at 2022-06-03 22:01:33 +0000 UTC (2+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Init container init1 ready: false, restart count 2 Jun 3 22:02:15.479: INFO: Init container init2 ready: false, restart count 0 Jun 3 22:02:15.479: INFO: Container run1 ready: false, restart count 
0 Jun 3 22:02:15.479: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:02:15.479: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:02:15.479: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:02:15.479: INFO: adopt-release-sx6z4 started at 2022-06-03 22:01:23 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.479: INFO: adopt-release-5sqrm started at 2022-06-03 22:01:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.479: INFO: foo-7zgjm started at 2022-06-03 22:01:46 +0000 UTC (0+1 container statuses recorded) Jun 3 22:02:15.479: INFO: Container c ready: true, restart count 0 Jun 3 22:02:15.712: INFO: Latency metrics for node node2 Jun 3 22:02:15.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7924" for this suite. • Failure [309.303 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:02:14.473: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:56.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:01:57.307: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:01:59.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, 
loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:02:01.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890517, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:02:04.335: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:16.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3425" for this suite. STEP: Destroying namespace "webhook-3425-markers" for this suite. 
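Editor's note on the timeout test above: it exercises four cases in sequence — a 1s timeout against a webhook that sleeps 5s (the request fails), the same short timeout with failurePolicy Ignore (the request succeeds), a timeout longer than the webhook latency, and an empty timeout that v1 defaults to 10s. A minimal client-go sketch of registering such a timeout-bounded webhook follows; the configuration name, namespace, service name, and path are illustrative stand-ins, not the suite's actual helpers, and a real registration would also carry a CABundle.

    // register_webhook.go - illustrative sketch, not the e2e suite's helper code.
    package main

    import (
        "context"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        timeout := int32(1)                                      // shorter than the webhook's 5s latency
        failOpen := admissionregistrationv1.Ignore               // timed-out call is treated as an allow
        sideEffects := admissionregistrationv1.SideEffectClassNone
        path := "/always-deny-with-delay"                        // hypothetical service path

        cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-example"},
            Webhooks: []admissionregistrationv1.ValidatingWebhook{{
                Name: "slow.example.com",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    // CABundle omitted for brevity; required in practice.
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-ns", Name: "e2e-test-webhook", Path: &path,
                    },
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
                    },
                }},
                TimeoutSeconds:          &timeout,  // apiserver aborts the webhook call after 1s
                FailurePolicy:           &failOpen,
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1"},
            }},
        }

        _, err = client.AdmissionregistrationV1().
            ValidatingWebhookConfigurations().
            Create(context.TODO(), cfg, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }

With FailurePolicy set to Ignore, the apiserver admits the request even though the webhook call timed out, which is exactly the "no error when timeout is shorter than webhook latency and failure policy is ignore" branch logged above.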
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.957 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":16,"skipped":231,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:16.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:16.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3251" for this suite. 
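Editor's note on the EndpointSlice test above: it completes almost immediately because the control plane does the work — creating a Service with a selector causes the endpointslice controller to create EndpointSlices labelled kubernetes.io/service-name=<service>, and deleting the Service removes them. A sketch of the verification side with client-go against the discovery.k8s.io/v1 API (GA in v1.21, the version under test); the namespace and service name are placeholders.

    // list_endpointslices.go - illustrative sketch of what the test verifies.
    package main

    import (
        "context"
        "fmt"

        discoveryv1 "k8s.io/api/discovery/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // EndpointSlices managed for a Service carry the well-known
        // "kubernetes.io/service-name" label, so a label selector finds them.
        slices, err := client.DiscoveryV1().EndpointSlices("default").List(context.TODO(),
            metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=my-service"})
        if err != nil {
            panic(err)
        }
        for _, s := range slices.Items {
            fmt.Printf("%s: %d endpoints (%s)\n", s.Name, len(s.Endpoints), s.AddressType)
        }
    }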
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":17,"skipped":242,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:33.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 3 22:01:33.353: INFO: PodSpec: initContainers in spec.initContainers Jun 3 22:02:19.579: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c6ec4863-28f2-40f2-a279-7621ed7c503c", GenerateName:"", Namespace:"init-container-5401", SelfLink:"", UID:"54d99ebc-f30d-4fa9-a827-7c5cbc063150", ResourceVersion:"41917", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63789890493, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"353621136"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.83\"\n ],\n \"mac\": \"a6:4f:68:1e:70:5c\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.83\"\n ],\n \"mac\": \"a6:4f:68:1e:70:5c\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c10af8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c10b10)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c10b40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c10b58)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c10b88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c10ba0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-f7vf9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc007ebb8c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f7vf9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f7vf9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f7vf9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00423fa68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c9f110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00423faf0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00423fb10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00423fb18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00423fb1c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00103a2b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890493, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890493, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890493, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890493, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.4.83", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.4.83"}}, StartTime:(*v1.Time)(0xc003c10bd0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c9f260)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc000c9f2d0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://dcbee9dbdadbf6995caa36a425f7b08cf7e36d933556bfd390638326002cc471", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc007ebb940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc007ebb920), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00423fb9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:19.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5401" for this suite. • [SLOW TEST:46.257 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":34,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:16.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 3 22:02:16.588: INFO: Waiting up to 5m0s for pod "pod-98e20a6e-a13a-4306-8239-fe01604954bc" in namespace "emptydir-2312" to be "Succeeded or Failed" Jun 3 22:02:16.591: INFO: Pod "pod-98e20a6e-a13a-4306-8239-fe01604954bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.958014ms Jun 3 22:02:18.595: INFO: Pod "pod-98e20a6e-a13a-4306-8239-fe01604954bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007340452s Jun 3 22:02:20.602: INFO: Pod "pod-98e20a6e-a13a-4306-8239-fe01604954bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013945453s STEP: Saw pod success Jun 3 22:02:20.602: INFO: Pod "pod-98e20a6e-a13a-4306-8239-fe01604954bc" satisfied condition "Succeeded or Failed" Jun 3 22:02:20.604: INFO: Trying to get logs from node node1 pod pod-98e20a6e-a13a-4306-8239-fe01604954bc container test-container: STEP: delete the pod Jun 3 22:02:20.615: INFO: Waiting for pod pod-98e20a6e-a13a-4306-8239-fe01604954bc to disappear Jun 3 22:02:20.617: INFO: Pod pod-98e20a6e-a13a-4306-8239-fe01604954bc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:20.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2312" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":244,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":10,"skipped":175,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:15.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:02:15.797: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"77534947-5b35-4d24-a23d-d27eddbdee85", Controller:(*bool)(0xc004b9c702), BlockOwnerDeletion:(*bool)(0xc004b9c703)}} Jun 3 22:02:15.801: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"88b2c3bf-0812-4c1d-a1c2-11e9dc04ff2b", Controller:(*bool)(0xc004b9c9aa), BlockOwnerDeletion:(*bool)(0xc004b9c9ab)}} Jun 3 22:02:15.805: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"eb7874f7-2e8f-4aeb-acfc-a255895b4f26", Controller:(*bool)(0xc004b9cc4a), BlockOwnerDeletion:(*bool)(0xc004b9cc4b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:20.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6600" for this suite. 
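------------------------------
The dependency-circle spec above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, exactly as the OwnerReferences dumps show) and then verifies that garbage collection is not deadlocked by it. Below is a minimal client-go sketch of that setup, not the suite's actual code: the namespace name, the KUBECONFIG source, and the use of a plain Update call are assumptions for illustration.

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: KUBECONFIG points at a reachable cluster, like the
        // /root/.kube/config used throughout this run.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        ns := "gc-demo" // hypothetical namespace; the suite used gc-6600

        yes := true
        // pod -> its owner: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2 (a cycle),
        // assuming the three pods already exist in ns.
        for _, pair := range [][2]string{{"pod1", "pod3"}, {"pod2", "pod1"}, {"pod3", "pod2"}} {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, pair[0], metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            owner, err := cs.CoreV1().Pods(ns).Get(ctx, pair[1], metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            // Mirror the OwnerReferences printed in the log, including
            // Controller and BlockOwnerDeletion.
            pod.OwnerReferences = []metav1.OwnerReference{{
                APIVersion:         "v1",
                Kind:               "Pod",
                Name:               owner.Name,
                UID:                owner.UID,
                Controller:         &yes,
                BlockOwnerDeletion: &yes,
            }}
            if _, err := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
            fmt.Printf("%s is now owned by %s\n", pod.Name, owner.Name)
        }
    }

Deleting any one pod in such a cycle must still go through, which is what the PASSED summary that follows confirms.
------------------------------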
• [SLOW TEST:5.093 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":11,"skipped":175,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:19.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Jun 3 22:02:19.754: INFO: Waiting up to 5m0s for pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2" in namespace "var-expansion-3581" to be "Succeeded or Failed" Jun 3 22:02:19.759: INFO: Pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715815ms Jun 3 22:02:21.762: INFO: Pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008391023s Jun 3 22:02:23.767: INFO: Pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012740974s Jun 3 22:02:25.772: INFO: Pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018084452s Jun 3 22:02:27.777: INFO: Pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022913374s STEP: Saw pod success Jun 3 22:02:27.777: INFO: Pod "var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2" satisfied condition "Succeeded or Failed" Jun 3 22:02:27.779: INFO: Trying to get logs from node node1 pod var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2 container dapi-container: STEP: delete the pod Jun 3 22:02:27.792: INFO: Waiting for pod var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2 to disappear Jun 3 22:02:27.794: INFO: Pod var-expansion-905ea80b-bd76-417d-b4ad-d0a763bcb4a2 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:27.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3581" for this suite. 
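------------------------------
The Variable Expansion spec just above exercises the kubelet's $(VAR) substitution: references of the form $(NAME) in a container's command or args are rewritten from that container's Env before the process starts. A minimal sketch of an equivalent pod follows; the pod name, the "default" namespace, and the env var MESSAGE are assumptions for illustration, with the busybox test image borrowed from elsewhere in this run.

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                    Command: []string{"sh", "-c"},
                    // The kubelet rewrites $(MESSAGE) from the container's
                    // Env before the shell ever sees the string.
                    Args: []string{"echo $(MESSAGE)"},
                    Env: []corev1.EnvVar{{
                        Name:  "MESSAGE",
                        Value: "substituted at container start",
                    }},
                }},
            },
        }
        created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created", created.Name) // its log should show the Env value, not the literal $(MESSAGE)
    }

If expansion works, the pod's output contains the Env value rather than the literal $(MESSAGE), which is essentially what the spec asserts before the summary below.
------------------------------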
• [SLOW TEST:8.082 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":602,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:46.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-515, will wait for the garbage collector to delete the pods Jun 3 22:01:50.977: INFO: Deleting Job.batch foo took: 3.514761ms Jun 3 22:01:51.077: INFO: Terminating Job.batch foo pods took: 100.563089ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:29.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-515" for this suite. • [SLOW TEST:42.604 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":15,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:29.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:02:29.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd" in namespace "projected-106" to be "Succeeded or Failed" Jun 3 22:02:29.593: INFO: Pod "downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.403857ms Jun 3 22:02:31.597: INFO: Pod "downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008116654s Jun 3 22:02:33.602: INFO: Pod "downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013142126s STEP: Saw pod success Jun 3 22:02:33.602: INFO: Pod "downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd" satisfied condition "Succeeded or Failed" Jun 3 22:02:33.604: INFO: Trying to get logs from node node1 pod downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd container client-container: STEP: delete the pod Jun 3 22:02:33.619: INFO: Waiting for pod downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd to disappear Jun 3 22:02:33.621: INFO: Pod downwardapi-volume-11630ed9-5591-4f4c-a51d-5779228341dd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:33.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-106" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:01.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8767 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 22:02:01.485: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 22:02:01.516: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:03.520: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:02:05.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:07.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:09.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:11.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:13.520: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:15.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:17.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:19.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:21.519: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:02:23.521: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 22:02:23.527: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 3 22:02:25.534: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 3 22:02:27.531: INFO: The status of Pod netserver-1 is Running (Ready = 
true) STEP: Creating test pods Jun 3 22:02:35.558: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 3 22:02:35.558: INFO: Breadth first check of 10.244.3.11 on host 10.10.190.207... Jun 3 22:02:35.561: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.101:9080/dial?request=hostname&protocol=udp&host=10.244.3.11&port=8081&tries=1'] Namespace:pod-network-test-8767 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:35.561: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:35.661: INFO: Waiting for responses: map[] Jun 3 22:02:35.661: INFO: reached 10.244.3.11 after 0/1 tries Jun 3 22:02:35.661: INFO: Breadth first check of 10.244.4.90 on host 10.10.190.208... Jun 3 22:02:35.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.101:9080/dial?request=hostname&protocol=udp&host=10.244.4.90&port=8081&tries=1'] Namespace:pod-network-test-8767 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:02:35.664: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:02:35.850: INFO: Waiting for responses: map[] Jun 3 22:02:35.851: INFO: reached 10.244.4.90 after 0/1 tries Jun 3 22:02:35.851: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:35.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8767" for this suite. • [SLOW TEST:34.399 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":753,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:01:27.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:01:27.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 3 22:01:35.432: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-03T22:01:35Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] 
manager:e2e.test operation:Update time:2022-06-03T22:01:35Z]] name:name1 resourceVersion:40907 uid:19ff967b-1f49-48fd-b188-0d649649aa2e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 3 22:01:45.438: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-03T22:01:45Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-03T22:01:45Z]] name:name2 resourceVersion:41244 uid:39c7ab7d-96f6-43d6-b2a1-cc465b550a7b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 3 22:01:55.443: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-03T22:01:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-03T22:01:55Z]] name:name1 resourceVersion:41435 uid:19ff967b-1f49-48fd-b188-0d649649aa2e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 3 22:02:05.450: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-03T22:01:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-03T22:02:05Z]] name:name2 resourceVersion:41654 uid:39c7ab7d-96f6-43d6-b2a1-cc465b550a7b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 3 22:02:15.456: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-03T22:01:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-03T22:01:55Z]] name:name1 resourceVersion:41787 uid:19ff967b-1f49-48fd-b188-0d649649aa2e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 3 22:02:25.463: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-03T22:01:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-03T22:02:05Z]] name:name2 resourceVersion:42134 uid:39c7ab7d-96f6-43d6-b2a1-cc465b550a7b] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:35.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2114" for this suite. 
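------------------------------
The ADDED/MODIFIED/DELETED events above, with their advancing resourceVersions, come from a watch opened against the custom resource's group/version. A short dynamic-client sketch that would print analogous "Got :" lines follows; the plural resource name "noxus" and cluster-scoped access are assumptions, since the log only shows the kind WishIHadChosenNoxu.

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        dc, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Group and version come straight from the events above; the plural
        // resource name is an assumption, as the log never prints it.
        gvr := schema.GroupVersionResource{
            Group:    "mygroup.example.com",
            Version:  "v1beta1",
            Resource: "noxus",
        }
        w, err := dc.Resource(gvr).Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        // Emits one line per event, analogous to the "Got :" entries above.
        for ev := range w.ResultChan() {
            obj := ev.Object.(*unstructured.Unstructured)
            fmt.Printf("Got : %s %s (rv %s)\n", ev.Type, obj.GetName(), obj.GetResourceVersion())
        }
    }

The dynamic client is the natural fit here because the CRD's Go types do not exist at compile time; every object arrives as *unstructured.Unstructured.
------------------------------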
• [SLOW TEST:68.118 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":14,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:20.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:02:21.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:02:23.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:02:25.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:02:27.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:02:29.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:02:31.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:02:33.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890541, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:02:36.346: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:36.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-922" for this suite. STEP: Destroying namespace "webhook-922-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.543 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":12,"skipped":203,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:36.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:36.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6098" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":13,"skipped":203,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:20.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:02:20.702: INFO: Creating deployment "webserver-deployment" Jun 3 22:02:20.705: INFO: Waiting for observed generation 1 Jun 3 22:02:22.710: INFO: Waiting for all required pods to come up Jun 3 22:02:22.714: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 3 22:02:34.720: INFO: Waiting for deployment "webserver-deployment" to complete Jun 3 22:02:34.725: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 3 22:02:34.732: INFO: Updating deployment webserver-deployment Jun 3 22:02:34.732: INFO: Waiting for observed generation 2 Jun 3 22:02:36.737: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 3 22:02:36.739: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 3 22:02:36.742: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 3 22:02:36.748: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 3 22:02:36.748: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 3 22:02:36.752: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 3 22:02:36.758: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 3 22:02:36.758: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 3 22:02:36.770: INFO: Updating deployment webserver-deployment Jun 3 22:02:36.770: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jun 3 22:02:36.774: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 3 22:02:36.776: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 22:02:38.787: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7506 c27a8ea6-ca48-4b7d-9b0a-27e92dcf5433 42578 3 2022-06-03 22:02:20 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-03 22:02:20 +0000 UTC 
FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005bc24e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-06-03 22:02:36 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-06-03 22:02:36 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 3 22:02:38.790: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-7506 a3a5af25-8594-4678-8f90-e66916150f0e 42551 3 2022-06-03 22:02:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c27a8ea6-ca48-4b7d-9b0a-27e92dcf5433 
0xc005bc2917 0xc005bc2918}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c27a8ea6-ca48-4b7d-9b0a-27e92dcf5433\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005bc29a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:02:38.790: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 3 22:02:38.790: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-7506 13249a35-0fa1-478f-bc29-eb50c757d697 42577 3 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c27a8ea6-ca48-4b7d-9b0a-27e92dcf5433 0xc005bc2a07 0xc005bc2a08}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:02:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c27a8ea6-ca48-4b7d-9b0a-27e92dcf5433\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005bc2a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:02:38.797: INFO: Pod "webserver-deployment-795d758f88-22dcx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-22dcx webserver-deployment-795d758f88- deployment-7506 9a3ca470-2b0a-48d3-8a06-1cc1ecdb0812 42591 0 2022-06-03 22:02:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.21" ], "mac": "d6:02:f5:17:34:c5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.21" ], "mac": "d6:02:f5:17:34:c5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005b97a5f 0xc005b97a70}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2022-06-03 22:02:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zkddl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkddl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always
,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.798: INFO: Pod "webserver-deployment-795d758f88-2h9qw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2h9qw webserver-deployment-795d758f88- deployment-7506 cff49f99-287a-4a1a-a018-90a35eb14539 42617 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005b97c8f 0xc005b97ca0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5lqx8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lqx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node
1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.799: INFO: Pod "webserver-deployment-795d758f88-6p228" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6p228 webserver-deployment-795d758f88- deployment-7506 31c38bd6-3955-4643-97bc-939521d37b89 42412 0 2022-06-03 22:02:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005b97ebf 0xc005b97ef0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x76bb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x76bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node
1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.799: INFO: Pod "webserver-deployment-795d758f88-6td2q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6td2q webserver-deployment-795d758f88- deployment-7506 c744109f-549a-4d65-a675-e1f0a0f79b1c 42523 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0c0ef 0xc005c0c100}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g27vg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g27vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.799: INFO: Pod "webserver-deployment-795d758f88-8kpht" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8kpht webserver-deployment-795d758f88- deployment-7506 3badf094-1434-4ba7-b789-51aa6b9289fc 42563 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0c2cf 0xc005c0c2e0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x9qpg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9qpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.800: INFO: Pod "webserver-deployment-795d758f88-bxn52" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bxn52 webserver-deployment-795d758f88- deployment-7506 66fd0c6d-ee99-4cb8-9ddc-1ad9fe801730 42513 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0c53f 0xc005c0c550}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
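------------------------------
[Editorial aside; the ManagedFields record above resumes after this block.] Every Pod dumped in this stretch shows the same picture: Phase Pending, the single httpd container either Waiting with reason ContainerCreating or not yet started, and the image webserver:404 — a tag that cannot be pulled. That is consistent with the deployment conformance tests, which deliberately roll a Deployment to a non-pullable image so the new ReplicaSet's pods stay unavailable. A minimal client-go sketch for reproducing this "is not available" listing follows; the kubeconfig path, the namespace deployment-7506, and the label name=httpd are taken from the log itself, while everything else (package layout, output format) is illustrative only.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the suite in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector taken from the Pod dumps above.
	pods, err := cs.CoreV1().Pods("deployment-7506").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Pending pods with image webserver:404 never become Ready.
		fmt.Printf("%s phase=%s node=%s\n", p.Name, p.Status.Phase, p.Spec.NodeName)
	}
}
------------------------------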
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jh49c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jh49c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.800: INFO: Pod "webserver-deployment-795d758f88-cc7qp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cc7qp webserver-deployment-795d758f88- deployment-7506 803623cb-5328-4232-8e41-af724d5d2afc 42540 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0c6df 0xc005c0c6f0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-grqj2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grqj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.801: INFO: Pod "webserver-deployment-795d758f88-cjfvg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cjfvg webserver-deployment-795d758f88- deployment-7506 38f396ae-e634-48d5-af9b-dc38b601b34d 42602 0 2022-06-03 22:02:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.23" ], "mac": "da:6d:1e:46:bd:cf", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.23" ], "mac": "da:6d:1e:46:bd:cf", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0c94f 0xc005c0c960}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2022-06-03 22:02:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pq2wg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pq2wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.801: INFO: Pod "webserver-deployment-795d758f88-jpbgq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jpbgq webserver-deployment-795d758f88- deployment-7506 6f36f63a-22c4-4f3b-8348-53d40695ba9b 42581 0 2022-06-03 22:02:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.102" ], "mac": "9e:d4:0f:56:88:a9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.102" ], "mac": "9e:d4:0f:56:88:a9", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0cc1f 0xc005c0cc30}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:37 +0000 UTC FieldsV1 
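------------------------------
[Editorial aside; the dump above resumes after this block.] The bracketed {manager Update v1 timestamp FieldsV1 {...}} entries threaded through these dumps are the Pods' managedFields: server-side field-ownership records naming which client (kube-controller-manager for the spec, kubelet for status, multus for the CNI annotations) last set which fields. They can be read straight off ObjectMeta; a small fragment, assuming `pod` is a *corev1.Pod already fetched as in the earlier listing sketch:

	for _, mf := range pod.ManagedFields {
		// Manager is the client name, Operation is Apply or Update,
		// and FieldsV1 holds the owned field set as JSON.
		fmt.Printf("manager=%s operation=%s time=%s\n", mf.Manager, mf.Operation, mf.Time)
	}
------------------------------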
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ndj7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ndj7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{P
hase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.802: INFO: Pod "webserver-deployment-795d758f88-mgjxk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mgjxk webserver-deployment-795d758f88- deployment-7506 04791fc8-e15d-4757-bb73-5227f9ce309a 42506 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0ce6f 0xc005c0ce80}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rvgnc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rvgnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,
RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.802: INFO: Pod "webserver-deployment-795d758f88-n26q4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n26q4 webserver-deployment-795d758f88- deployment-7506 97368bf2-1353-40f3-b194-6fa88a6ee67e 42499 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0d04f 0xc005c0d080}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mkkl6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mkkl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.803: INFO: Pod "webserver-deployment-795d758f88-qq6kf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qq6kf webserver-deployment-795d758f88- deployment-7506 39ab8868-98d1-459d-86cc-65601c2a6cc6 42598 0 2022-06-03 22:02:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.22" ], "mac": "66:67:85:40:22:9d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.22" ], "mac": "66:67:85:40:22:9d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0d27f 0xc005c0d290}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2022-06-03 22:02:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xrkq4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrkq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.803: INFO: Pod "webserver-deployment-795d758f88-wts6z" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wts6z webserver-deployment-795d758f88- deployment-7506 d1ed17df-acae-4675-ba72-2de93c4614f7 42508 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 a3a5af25-8594-4678-8f90-e66916150f0e 0xc005c0d4df 0xc005c0d4f0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
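The dumps here alternate between two ReplicaSets: pods labeled pod-template-hash:795d758f88 carry the deliberately unresolvable image webserver:404 and stay Pending, while pods labeled 847dcfb7fb run k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 and become Ready. This is consistent with the Deployment proportional-scaling scenario, in which the pod template is rolled to a bad image tag so the new ReplicaSet can never become available. Below is a minimal client-go sketch of the check the "is available / is not available" lines summarize (per-pod readiness behind the Deployment); the kubeconfig path, the deployment-7506 namespace, and the name=httpd selector are taken from this log, and everything else is illustrative rather than the e2e framework's actual code. The pod dump then continues.

// Sketch only: reproducing the "is available / is not available" summary
// lines with client-go. Values are taken from this log; the program itself
// is illustrative, not the e2e framework's code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the property
// the log reduces to "available".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// name=httpd is the pod-template label shared by both ReplicaSets above.
	pods, err := cs.CoreV1().Pods("deployment-7506").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		// Each pod in this test runs exactly one container.
		fmt.Printf("Pod %q phase=%s available=%t image=%s\n",
			p.Name, p.Status.Phase, podReady(p), p.Spec.Containers[0].Image)
	}
}
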
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a3a5af25-8594-4678-8f90-e66916150f0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q7lnz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7lnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.803: INFO: Pod "webserver-deployment-847dcfb7fb-2nn9t" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2nn9t webserver-deployment-847dcfb7fb- deployment-7506 2e7e9108-03c6-4c59-8c08-0d0b31764d41 42266 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.96" ], "mac": "62:e8:76:cf:9f:a9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.96" ], "mac": "62:e8:76:cf:9f:a9", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c0d68f 0xc005c0d6a0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4tb6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tb6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.96,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b05bcc677fbdb49ec9215f8200c313cc3292c665ced7affad5ae1cac0a007b4f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.804: INFO: Pod "webserver-deployment-847dcfb7fb-4m5gr" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4m5gr webserver-deployment-847dcfb7fb- deployment-7506 263040c2-9ced-489c-ab29-841b40874799 42566 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c0d88f 0xc005c0d8a0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rkg27,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkg27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.804: INFO: Pod "webserver-deployment-847dcfb7fb-6d9ck" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6d9ck webserver-deployment-847dcfb7fb- deployment-7506 e9af781c-7563-439f-bcce-59bdef30576c 42552 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c0da4f 0xc005c0da60}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mmjx4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Con
tainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.805: INFO: Pod "webserver-deployment-847dcfb7fb-6pxdj" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6pxdj webserver-deployment-847dcfb7fb- deployment-7506 947d5ec4-3915-4900-a46f-911676e81677 42247 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.18" ], "mac": "9e:9c:1f:f9:3b:f1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.18" ], "mac": "9e:9c:1f:f9:3b:f1", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c0dbcf 0xc005c0dbe0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 
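Each dumped pod records its controlling ReplicaSet under ownerReferences, with controller and blockOwnerDeletion set by kube-controller-manager; that is how pods such as webserver-deployment-847dcfb7fb-6pxdj are attributed to their ReplicaSet in these lines. The sketch below uses the real apimachinery helper metav1.GetControllerOf; the surrounding program is hypothetical. The dump then resumes.

// Sketch: resolving which ReplicaSet controls each pod from its
// ownerReferences, as recorded in the dumps above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("deployment-7506").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		// controller=true in the ownerReference marks the managing ReplicaSet.
		if ref := metav1.GetControllerOf(&pods.Items[i]); ref != nil && ref.Kind == "ReplicaSet" {
			fmt.Printf("%s <- %s (uid %s)\n", pods.Items[i].Name, ref.Name, ref.UID)
		}
	}
}
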
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wsgg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wsgg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.18,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://9273556a0e4e49c7a26c9e4f81b754d436ea9eb0f3b2a29372b4c48583ef5cbc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.805: INFO: Pod "webserver-deployment-847dcfb7fb-9f6jp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9f6jp webserver-deployment-847dcfb7fb- deployment-7506 c800b9d1-ad41-40ca-ad99-42f88c1f6c8c 42279 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.95" ], "mac": "1a:e5:dc:ad:db:1c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.95" ], "mac": "1a:e5:dc:ad:db:1c", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c0de2f 0xc005c0de40}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.95\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-scscp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scscp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.95,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://0282e64b92d1a52be3801c1fe693238a86fb722a8d626833abd61abaaed54858,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.805: INFO: Pod "webserver-deployment-847dcfb7fb-9nz48" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9nz48 webserver-deployment-847dcfb7fb- deployment-7506 8a15914f-74e7-45af-8cdb-c2e11fe90838 42568 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4202f 0xc005c42040}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v7dmv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v7dmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.806: INFO: Pod "webserver-deployment-847dcfb7fb-crb92" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-crb92 webserver-deployment-847dcfb7fb- deployment-7506 d6e9a53e-564f-4fa3-bec9-193fb5fdef82 42249 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.17" ], "mac": "be:a0:44:51:1f:90", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.17" ], "mac": "be:a0:44:51:1f:90", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4219f 0xc005c421b0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:29 +0000 UTC FieldsV1 
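Every pod spec in these dumps mounts a single projected volume, kube-api-access-<suffix>, combining a service-account token (3607-second lifetime), the kube-root-ca.crt ConfigMap, and the pod's namespace via the downward API, with DefaultMode *420 (decimal notation for octal 0644). For readability, here is that volume rebuilt as a Go literal; the field values are copied from the dumps, while the helper names are mine.

// Sketch: the projected service-account volume appearing in every pod dump
// above, as a Go literal. 420 decimal == 0644 octal. Illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "kube-api-access-zf52v", // per-pod random suffix
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: int32Ptr(0644), // logged as *420 (decimal)
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: int64Ptr(3607),
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
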
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zf52v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zf52v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.17,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://fb46a32f175495fe3c8d82033ba2e813e97fdb906199156d868110d608879ccf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.806: INFO: Pod "webserver-deployment-847dcfb7fb-dd6w9" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dd6w9 webserver-deployment-847dcfb7fb- deployment-7506 f9782695-05a1-422a-8b93-6c381bec0dc9 42222 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.16" ], "mac": "72:b0:b1:2e:70:9c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.16" ], "mac": "72:b0:b1:2e:70:9c", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4239f 0xc005c423b0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pbzkr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbzkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleratio
n{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.16,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5b7b1767be78b7e198c60e93389067891ac45ee5758719057ff95ab479641c52,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.806: INFO: Pod "webserver-deployment-847dcfb7fb-dlnx8" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dlnx8 webserver-deployment-847dcfb7fb- deployment-7506 1d27ae7d-14e4-4938-893b-16ffe764cafb 42572 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4259f 0xc005c425b0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
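All of these pods report QOSClass:BestEffort because the httpd container declares neither requests nor limits (Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}}). A minimal restatement of that rule follows, covering only the case seen in this log; the kubelet's full classification additionally distinguishes Burstable from Guaranteed per resource and also inspects init containers.

// Sketch: why every pod above is QOSClass:BestEffort. Simplified rule for
// the single case in this log; not the kubelet's complete logic.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isBestEffort returns true when no container declares any resource
// requests or limits, matching the httpd containers dumped above.
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "httpd", Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"}},
	}}
	fmt.Println(isBestEffort(pod)) // true
}
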
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qn5c5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn5c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.807: INFO: Pod "webserver-deployment-847dcfb7fb-dvlc6" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dvlc6 webserver-deployment-847dcfb7fb- deployment-7506 e7291915-7521-443c-98a0-85eea58e5a46 42554 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4274f 0xc005c42760}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c68fj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Con
tainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c68fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.807: INFO: Pod "webserver-deployment-847dcfb7fb-ktlmn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ktlmn webserver-deployment-847dcfb7fb- deployment-7506 cdb93d28-402b-4659-b103-997ed485cbb2 42564 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c428cf 0xc005c428e0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qlsp4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.807: INFO: Pod "webserver-deployment-847dcfb7fb-l98fx" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-l98fx webserver-deployment-847dcfb7fb- deployment-7506 1b979e6c-c6e4-4b03-9230-4a9cdc77839b 42548 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c42a6f 0xc005c42a90}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sq8c6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Con
tainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sq8c6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.807: INFO: Pod "webserver-deployment-847dcfb7fb-prgfk" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-prgfk webserver-deployment-847dcfb7fb- deployment-7506 247302a1-bfc0-4edd-a7ad-05bc71691696 42285 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.98" ], "mac": "82:98:c1:67:63:5a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.98" ], "mac": "82:98:c1:67:63:5a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c42c0f 0xc005c42c20}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jxrdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jxrdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.98,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://362ba625bd099a680ef8d390e621ded2126b162cc7f9dd3a0a5b12f7f05a87c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.808: INFO: Pod "webserver-deployment-847dcfb7fb-q9tkl" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-q9tkl webserver-deployment-847dcfb7fb- deployment-7506 994281df-d4a3-45a7-9432-323d358236f6 42592 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c42e5f 0xc005c42e70}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:02:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xkwcn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkwcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceA
ccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:02:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.808: INFO: Pod "webserver-deployment-847dcfb7fb-v928k" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-v928k webserver-deployment-847dcfb7fb- deployment-7506 acb80578-53c8-41bf-8f0c-515425f0dc1f 42544 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4306f 0xc005c43080}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g55sw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g55sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.808: INFO: Pod "webserver-deployment-847dcfb7fb-vsmdr" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vsmdr webserver-deployment-847dcfb7fb- deployment-7506 1d7e7f12-e26c-4d26-b730-2c4ffea01a26 42561 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c431ff 0xc005c43220}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cr2b7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Con
tainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cr2b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.809: INFO: Pod "webserver-deployment-847dcfb7fb-w4f7k" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-w4f7k webserver-deployment-847dcfb7fb- deployment-7506 eaf5d417-db86-466f-9263-42df42f2fe99 42205 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.14" ], "mac": "8a:b7:e7:0d:07:e5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.14" ], "mac": "8a:b7:e7:0d:07:e5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c433af 0xc005c433c0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x74dp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x74dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.14,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e6b9327a1863cc2eb12ca4a9c1c924d50e055853ccfab9a7fc96c3648d110664,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.809: INFO: Pod "webserver-deployment-847dcfb7fb-x529k" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-x529k webserver-deployment-847dcfb7fb- deployment-7506 6cb444b7-4dcf-4179-86f8-a93cbcef8eee 42518 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c435ef 0xc005c43600}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wqr5l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqr5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.809: INFO: Pod "webserver-deployment-847dcfb7fb-x7srb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-x7srb webserver-deployment-847dcfb7fb- deployment-7506 69789198-d661-4387-86f2-48f19c766fb0 42534 0 2022-06-03 22:02:36 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4378f 0xc005c437a0}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zhx7v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhx7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:02:38.809: INFO: Pod "webserver-deployment-847dcfb7fb-zxx8s" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zxx8s webserver-deployment-847dcfb7fb- deployment-7506 699f5aaa-c02c-44e5-8249-d3a61eade285 42229 0 2022-06-03 22:02:20 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.94" ], "mac": "7e:81:87:e6:39:06", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.94" ], "mac": "7e:81:87:e6:39:06", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 13249a35-0fa1-478f-bc29-eb50c757d697 0xc005c4392f 0xc005c43940}] [] [{kube-controller-manager Update v1 2022-06-03 22:02:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13249a35-0fa1-478f-bc29-eb50c757d697\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:02:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tdlm6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdlm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.94,StartTime:2022-06-03 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:02:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://21c2636e639a588832ba96cefddecf9db4bbf6568fac6190bdf5e1324b707ebf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:38.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7506" for this suite. 
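Note on the scenario above: proportional scaling means that when a Deployment is resized in the middle of a RollingUpdate, the controller grows or shrinks the old and new ReplicaSets in proportion to how many replicas each currently owns. A hand-rolled sketch of the same situation follows; the manifest and commands are illustrative (the second image tag is assumed from the e2e image registry, and the suite generates its own fixture rather than using these exact names):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: httpd
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3            # pods allowed above the desired count during rollout
      maxUnavailable: 2      # pods allowed to be unavailable during rollout
  template:
    metadata:
      labels:
        name: httpd
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
# Start a rollout, then scale while old and new ReplicaSets both own pods:
kubectl set image deployment/webserver-deployment httpd=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl scale deployment/webserver-deployment --replicas=30
kubectl get rs -l name=httpd   # the extra replicas are split proportionally between the ReplicaSets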
• [SLOW TEST:18.137 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":19,"skipped":269,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:33.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 3 22:02:33.707: INFO: Waiting up to 5m0s for pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58" in namespace "emptydir-4605" to be "Succeeded or Failed" Jun 3 22:02:33.711: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.57709ms Jun 3 22:02:35.718: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010094189s Jun 3 22:02:37.722: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014453632s Jun 3 22:02:39.726: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01868428s Jun 3 22:02:41.730: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022519897s Jun 3 22:02:43.737: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029490056s Jun 3 22:02:45.741: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.033836531s STEP: Saw pod success Jun 3 22:02:45.741: INFO: Pod "pod-43093364-f64f-40c3-bd3d-312bd3ccde58" satisfied condition "Succeeded or Failed" Jun 3 22:02:45.744: INFO: Trying to get logs from node node1 pod pod-43093364-f64f-40c3-bd3d-312bd3ccde58 container test-container: STEP: delete the pod Jun 3 22:02:45.758: INFO: Waiting for pod pod-43093364-f64f-40c3-bd3d-312bd3ccde58 to disappear Jun 3 22:02:45.761: INFO: Pod pod-43093364-f64f-40c3-bd3d-312bd3ccde58 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:45.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4605" for this suite. 
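What the (root,0666,tmpfs) case exercises: a pod mounts a memory-backed emptyDir, creates a file with mode 0666, and must reach Succeeded once the mode checks out. A minimal stand-in, assuming a plain busybox image in place of the suite's own mounttest container:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: cache
      mountPath: /mnt
  volumes:
  - name: cache
    emptyDir:
      medium: Memory               # tmpfs-backed, matching the test above
EOF
kubectl logs emptydir-0666-tmpfs-demo   # prints 666 after the pod Succeeds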
• [SLOW TEST:12.097 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:35.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 3 22:02:35.925: INFO: Waiting up to 5m0s for pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24" in namespace "emptydir-8191" to be "Succeeded or Failed" Jun 3 22:02:35.929: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702966ms Jun 3 22:02:37.934: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008171881s Jun 3 22:02:39.938: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01218637s Jun 3 22:02:41.941: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016014473s Jun 3 22:02:43.945: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019319958s Jun 3 22:02:45.948: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022736147s Jun 3 22:02:47.952: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026509976s Jun 3 22:02:49.955: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02997141s Jun 3 22:02:51.959: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.033337768s STEP: Saw pod success Jun 3 22:02:51.959: INFO: Pod "pod-317461a7-32c0-4db0-a9df-c53e26f4de24" satisfied condition "Succeeded or Failed" Jun 3 22:02:51.961: INFO: Trying to get logs from node node2 pod pod-317461a7-32c0-4db0-a9df-c53e26f4de24 container test-container: STEP: delete the pod Jun 3 22:02:51.976: INFO: Waiting for pod pod-317461a7-32c0-4db0-a9df-c53e26f4de24 to disappear Jun 3 22:02:51.978: INFO: Pod pod-317461a7-32c0-4db0-a9df-c53e26f4de24 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:51.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8191" for this suite. 
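The (root,0666,default) case that just finished is the same check with the volume backed by node-local storage instead of tmpfs; the only delta from the sketch above is dropping medium: Memory (leaving emptyDir: {}). The field and its allowed values can be read straight from the live API:

kubectl explain pod.spec.volumes.emptyDir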
• [SLOW TEST:16.095 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":766,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:36.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:52.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3384" for this suite. • [SLOW TEST:16.108 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":14,"skipped":221,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:38.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Jun 3 22:02:38.870: INFO: Waiting up to 5m0s for pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d" in namespace "emptydir-1817" to be "Succeeded or Failed" Jun 3 22:02:38.873: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.137953ms Jun 3 22:02:40.877: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007218773s Jun 3 22:02:42.881: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01160904s Jun 3 22:02:44.885: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015030955s Jun 3 22:02:46.891: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020978497s Jun 3 22:02:48.894: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024879152s Jun 3 22:02:50.903: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033149353s Jun 3 22:02:52.907: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.03785817s STEP: Saw pod success Jun 3 22:02:52.907: INFO: Pod "pod-6bb3ee36-0c80-46a4-8b83-f617d869391d" satisfied condition "Succeeded or Failed" Jun 3 22:02:52.910: INFO: Trying to get logs from node node2 pod pod-6bb3ee36-0c80-46a4-8b83-f617d869391d container test-container: STEP: delete the pod Jun 3 22:02:52.925: INFO: Waiting for pod pod-6bb3ee36-0c80-46a4-8b83-f617d869391d to disappear Jun 3 22:02:52.928: INFO: Pod pod-6bb3ee36-0c80-46a4-8b83-f617d869391d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:52.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1817" for this suite. 
• [SLOW TEST:14.099 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":271,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:45.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0b9dff30-2271-4976-bce4-613625c78d5f STEP: Creating a pod to test consume configMaps Jun 3 22:02:45.903: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145" in namespace "configmap-7017" to be "Succeeded or Failed" Jun 3 22:02:45.910: INFO: Pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145": Phase="Pending", Reason="", readiness=false. Elapsed: 6.842494ms Jun 3 22:02:47.915: INFO: Pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011910932s Jun 3 22:02:49.918: INFO: Pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014853601s Jun 3 22:02:51.923: INFO: Pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019676732s Jun 3 22:02:53.930: INFO: Pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02673635s STEP: Saw pod success Jun 3 22:02:53.930: INFO: Pod "pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145" satisfied condition "Succeeded or Failed" Jun 3 22:02:53.932: INFO: Trying to get logs from node node1 pod pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145 container agnhost-container: STEP: delete the pod Jun 3 22:02:53.946: INFO: Waiting for pod pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145 to disappear Jun 3 22:02:53.948: INFO: Pod pod-configmaps-ce46a981-7389-479a-990f-1daeeb1ce145 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:53.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7017" for this suite. 
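The non-root ConfigMap case mounts a ConfigMap volume into a pod running under a non-zero UID and verifies the projected file is still readable. A minimal sketch; the ConfigMap name, key, UID and image are illustrative, not the suite's generated fixture:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # whole pod runs as a non-root UID
    runAsNonRoot: true
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF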
• [SLOW TEST:8.095 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:54.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:54.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5421" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":19,"skipped":438,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:27.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:02:55.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5625" for this suite. • [SLOW TEST:28.078 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":36,"skipped":617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:52.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:01.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-809" for this suite. • [SLOW TEST:8.766 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":237,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:53.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 3 22:02:53.066: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:02.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3637" for this suite. 
• [SLOW TEST:9.586 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":21,"skipped":322,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:36.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6154.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6154.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6154.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6154.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 22:02:54.080: INFO: DNS probes using dns-test-658fe68a-cb78-47ac-808d-854c4a631269 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6154.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6154.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6154.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6154.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 22:03:00.121: INFO: DNS probes using dns-test-93d0fdd6-34bb-47d2-b934-62e8aed33a93 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6154.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6154.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6154.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6154.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 3 22:03:06.168: INFO: DNS probes using dns-test-704e1333-6713-42c3-be72-4091d6eeccea succeeded STEP: deleting the pod STEP: deleting the test 
externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:06.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6154" for this suite. • [SLOW TEST:30.168 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":15,"skipped":260,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:54.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:03:00.189: INFO: Deleting pod "var-expansion-32cb544d-1ade-454f-8cb3-474037953999" in namespace "var-expansion-418" Jun 3 22:03:00.193: INFO: Wait up to 5m0s for pod "var-expansion-32cb544d-1ade-454f-8cb3-474037953999" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:06.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-418" for this suite. 
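The backtick case above is a negative test: volume subpaths expand only $(VAR) references to the container's own environment variables, so a subPathExpr containing backticks is invalid and the pod never starts (hence the delete-and-wait in the log). The accepted form looks like this sketch (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls -ld /work"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: work
      mountPath: /work
      subPathExpr: $(POD_NAME)   # $(VAR) expansion is allowed; `backticks` are not
  volumes:
  - name: work
    emptyDir: {}
EOF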
• [SLOW TEST:12.055 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":20,"skipped":468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:02.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 3 22:03:02.676: INFO: The status of Pod annotationupdateaff489cc-5369-4708-8b9a-1c1d137d68b0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:03:04.679: INFO: The status of Pod annotationupdateaff489cc-5369-4708-8b9a-1c1d137d68b0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:03:06.679: INFO: The status of Pod annotationupdateaff489cc-5369-4708-8b9a-1c1d137d68b0 is Running (Ready = true) Jun 3 22:03:07.197: INFO: Successfully updated pod "annotationupdateaff489cc-5369-4708-8b9a-1c1d137d68b0" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:11.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1117" for this suite. 
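The annotation-update case relies on the kubelet refreshing downward-API files in place: the pod projects its own metadata.annotations into a file, the test changes an annotation, and the file content follows without a container restart. A sketch of the same idea (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo   # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo build=two --overwrite
# within the kubelet's sync period, /etc/podinfo/annotations reflects the new value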
• [SLOW TEST:8.591 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":324,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:06.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:03:06.270: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:12.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5614" for this suite. 
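Creating and deleting a custom resource definition, as exercised above, needs nothing beyond a minimal apiextensions.k8s.io/v1 object; the group, names and (required) structural schema below are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl delete crd foos.example.com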
• [SLOW TEST:6.047 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":16,"skipped":276,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:55.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:02:56.001: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 3 22:03:04.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 create -f -' Jun 3 22:03:05.248: INFO: stderr: "" Jun 3 22:03:05.248: INFO: stdout: "e2e-test-crd-publish-openapi-9856-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 3 22:03:05.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 delete e2e-test-crd-publish-openapi-9856-crds test-foo' Jun 3 22:03:05.433: INFO: stderr: "" Jun 3 22:03:05.433: INFO: stdout: "e2e-test-crd-publish-openapi-9856-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 3 22:03:05.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 apply -f -' Jun 3 22:03:05.786: INFO: stderr: "" Jun 3 22:03:05.786: INFO: stdout: "e2e-test-crd-publish-openapi-9856-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 3 22:03:05.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 delete e2e-test-crd-publish-openapi-9856-crds test-foo' Jun 3 22:03:05.973: INFO: stderr: "" Jun 3 22:03:05.974: INFO: stdout: "e2e-test-crd-publish-openapi-9856-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jun 3 22:03:05.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 create -f -' Jun 3 22:03:06.330: INFO: rc: 1 Jun 3 22:03:06.330: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 apply -f -' Jun 3 22:03:06.643: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 3 22:03:06.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 create -f -' Jun 3 22:03:06.939: INFO: rc: 1 Jun 3 22:03:06.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 --namespace=crd-publish-openapi-2205 apply -f -' Jun 3 22:03:07.218: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 3 22:03:07.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 explain e2e-test-crd-publish-openapi-9856-crds' Jun 3 22:03:07.570: INFO: stderr: "" Jun 3 22:03:07.570: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 3 22:03:07.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 explain e2e-test-crd-publish-openapi-9856-crds.metadata' Jun 3 22:03:07.946: INFO: stderr: "" Jun 3 22:03:07.946: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. 
It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 3 22:03:07.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 explain e2e-test-crd-publish-openapi-9856-crds.spec' Jun 3 22:03:08.303: INFO: stderr: "" Jun 3 22:03:08.303: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 3 22:03:08.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 explain e2e-test-crd-publish-openapi-9856-crds.spec.bars' Jun 3 22:03:08.665: INFO: stderr: "" Jun 3 22:03:08.665: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 3 22:03:08.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2205 explain e2e-test-crd-publish-openapi-9856-crds.spec.bars2' Jun 3 22:03:09.016: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2205" for this suite. 
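Both the kubectl explain output and the client-side validation failures above come from the OpenAPI schema the apiserver publishes for the CRD. An approximation of the fixture's schema, reconstructed from the explain output captured in the log (resource names and field types are inferred, not copied from the suite's source):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: Foo CRD for Testing
        properties:
          spec:
            type: object
            description: Specification of Foo
            properties:
              bars:
                type: array
                description: List of Bars and their specs.
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    age:
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
          status:
            type: object
            description: Status of Foo
EOF
kubectl explain foos.spec.bars   # should mirror the explain output captured above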
• [SLOW TEST:16.703 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":37,"skipped":642,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:12.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:12.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-8852" for this suite. 
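The table-transformation spec exercises the apiserver's content negotiation: a client may ask for a resource rendered server-side as rows and columns by requesting the Table media type, and a backend that cannot supply the required object metadata has to answer 406 Not Acceptable. A rough illustration of the negotiation itself, run against the pod list rather than the test's special backend (the local proxy port is an assumption):

    # Expose the apiserver on localhost so curl needs no credentials.
    kubectl proxy --port=8001 &

    # Ask for the server-side Table rendering. A backend that implements
    # metadata returns a kind: Table document; one that does not, and is
    # offered no fallback media type, responds 406 Not Acceptable.
    curl -i http://127.0.0.1:8001/api/v1/namespaces/default/pods \
      -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io'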
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":38,"skipped":650,"failed":0} SSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:01.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6fhgs in namespace proxy-80 I0603 22:03:01.485482 35 runners.go:190] Created replication controller with name: proxy-service-6fhgs, namespace: proxy-80, replica count: 1 I0603 22:03:02.537213 35 runners.go:190] proxy-service-6fhgs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:03:03.538370 35 runners.go:190] proxy-service-6fhgs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:03:04.539286 35 runners.go:190] proxy-service-6fhgs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0603 22:03:05.540144 35 runners.go:190] proxy-service-6fhgs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 22:03:05.543: INFO: setup took 4.06754908s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 3 22:03:05.548: INFO: (0) /api/v1/namespaces/proxy-80/pods/proxy-service-6fhgs-hpfp5:162/proxy/: bar (200; 4.481701ms) Jun 3 22:03:05.548: INFO: (0) /api/v1/namespaces/proxy-80/pods/http:proxy-service-6fhgs-hpfp5:160/proxy/: foo (200; 4.296008ms) Jun 3 22:03:05.548: INFO: (0) /api/v1/namespaces/proxy-80/pods/proxy-service-6fhgs-hpfp5:160/proxy/: foo (200; 4.541231ms) Jun 3 22:03:05.548: INFO: (0) /api/v1/namespaces/proxy-80/pods/proxy-service-6fhgs-hpfp5/proxy/: test (200; 5.073293ms) [attempts (0) through (19), 320 requests in total, are condensed here: every pod, pod-port, and service-port endpoint (160 "foo", 162 "bar", 1080, the pod root "test", TLS ports 443, 460 "tls baz", and 462 "tls qux", plus the service ports portname1, portname2, tlsportname1, and tlsportname2) answered 200 in roughly 2-6ms each; the per-attempt entries did not survive capture intact, as the angle-bracketed response bodies swallowed adjacent lines]
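Every attempt in this case list is an ordinary HTTP request routed through the apiserver's proxy subresource rather than straight to the pod. The same endpoints can be probed by hand; the namespace, pod, and port names below are the ones from this run and will differ in any other cluster:

    kubectl proxy --port=8001 &

    # Proxy to one pod port; the ":162" suffix selects the port (the "bar" endpoint above).
    curl http://127.0.0.1:8001/api/v1/namespaces/proxy-80/pods/proxy-service-6fhgs-hpfp5:162/proxy/

    # Proxy via the service and a named port; the apiserver forwards to a ready endpoint.
    curl http://127.0.0.1:8001/api/v1/namespaces/proxy-80/services/proxy-service-6fhgs:portname1/proxy/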
[the remaining proxy attempts and this test's closing trailer (AfterEach teardown, namespace destruction, SLOW TEST timing, and PASSED record) did not survive capture; the opening lines of the next spec, the Kubectl client BeforeEach with its timestamp, were also clipped] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 3 22:03:12.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6747 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Jun 3 22:03:12.927: INFO: stderr: "" Jun 3 22:03:12.927: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 3 22:03:17.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6747 get pod e2e-test-httpd-pod -o json' Jun 3 22:03:18.164: INFO: stderr: "" Jun 3 22:03:18.164: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.36\\\"\\n ],\\n \\\"mac\\\": \\\"e2:59:19:f3:8b:02\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.36\\\"\\n ],\\n \\\"mac\\\": \\\"e2:59:19:f3:8b:02\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-06-03T22:03:12Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6747\",\n \"resourceVersion\": \"43647\",\n \"uid\": \"53157f5b-6205-4ef1-9724-38f7b5e25271\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\":
\"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-69xmj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node1\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-69xmj\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-03T22:03:12Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-03T22:03:16Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-03T22:03:16Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-03T22:03:12Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://cb7e6adeeed0a72ebd3ac5e9b2feefcccfa5d34e0c16861ca62d110c11425ac3\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-06-03T22:03:15Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.207\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.3.36\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.3.36\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-06-03T22:03:12Z\"\n }\n}\n" STEP: replace the image in the pod Jun 3 22:03:18.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6747 replace -f -' Jun 3 22:03:18.554: INFO: stderr: "" Jun 3 22:03:18.554: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Jun 3 22:03:18.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6747 delete pods e2e-test-httpd-pod' Jun 3 22:03:32.131: INFO: stderr: "" Jun 3 22:03:32.131: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:32.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6747" for this suite. • [SLOW TEST:19.389 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":39,"skipped":657,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:11.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-fwv4 STEP: Creating a pod to test atomic-volume-subpath Jun 3 22:03:11.343: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fwv4" in namespace "subpath-8739" to be "Succeeded or Failed" Jun 3 22:03:11.356: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.278948ms Jun 3 22:03:13.358: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014929244s Jun 3 22:03:15.363: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 4.020130975s Jun 3 22:03:17.367: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 6.02415997s Jun 3 22:03:19.371: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 8.027427189s Jun 3 22:03:21.375: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 10.031723564s Jun 3 22:03:23.379: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 12.035743236s Jun 3 22:03:25.383: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 14.039668372s Jun 3 22:03:27.389: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 16.045708525s Jun 3 22:03:29.392: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 18.049175832s Jun 3 22:03:31.397: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. Elapsed: 20.05334655s Jun 3 22:03:33.401: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.057509347s Jun 3 22:03:35.404: INFO: Pod "pod-subpath-test-configmap-fwv4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060949474s STEP: Saw pod success Jun 3 22:03:35.404: INFO: Pod "pod-subpath-test-configmap-fwv4" satisfied condition "Succeeded or Failed" Jun 3 22:03:35.407: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-fwv4 container test-container-subpath-configmap-fwv4: STEP: delete the pod Jun 3 22:03:35.421: INFO: Waiting for pod pod-subpath-test-configmap-fwv4 to disappear Jun 3 22:03:35.423: INFO: Pod pod-subpath-test-configmap-fwv4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-fwv4 Jun 3 22:03:35.423: INFO: Deleting pod "pod-subpath-test-configmap-fwv4" in namespace "subpath-8739" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:35.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8739" for this suite. • [SLOW TEST:24.131 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":355,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:32.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 3 22:03:32.191: INFO: Waiting up to 5m0s for pod "security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372" in namespace "security-context-7550" to be "Succeeded or Failed" Jun 3 22:03:32.194: INFO: Pod "security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.848904ms Jun 3 22:03:34.198: INFO: Pod "security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00626317s Jun 3 22:03:36.200: INFO: Pod "security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008963421s STEP: Saw pod success Jun 3 22:03:36.200: INFO: Pod "security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372" satisfied condition "Succeeded or Failed" Jun 3 22:03:36.203: INFO: Trying to get logs from node node1 pod security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372 container test-container: STEP: delete the pod Jun 3 22:03:36.215: INFO: Waiting for pod security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372 to disappear Jun 3 22:03:36.217: INFO: Pod security-context-abdd4c98-8d7f-472f-9a15-6c6b8b34d372 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:36.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7550" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":40,"skipped":660,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:36.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:36.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8242" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":41,"skipped":676,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:36.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 3 22:03:36.385: INFO: Waiting up to 5m0s for pod "downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e" in namespace "downward-api-6230" to be "Succeeded or Failed" Jun 3 22:03:36.389: INFO: Pod "downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005534ms Jun 3 22:03:38.392: INFO: Pod "downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007320864s Jun 3 22:03:40.396: INFO: Pod "downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011297736s STEP: Saw pod success Jun 3 22:03:40.396: INFO: Pod "downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e" satisfied condition "Succeeded or Failed" Jun 3 22:03:40.402: INFO: Trying to get logs from node node1 pod downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e container dapi-container: STEP: delete the pod Jun 3 22:03:40.416: INFO: Waiting for pod downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e to disappear Jun 3 22:03:40.418: INFO: Pod downward-api-d4606a52-d7c3-4769-ac84-eff099deea2e no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:40.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6230" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":680,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:12.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:41.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8148" for this suite. 
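The three containers above differ only in their restart policy (terminate-cmd-rpa runs with restartPolicy: Always, -rpof with OnFailure, -rpn with Never), and the assertions are on the resulting Phase, Ready condition, State, and RestartCount. A stripped-down probe of the same machinery, with a hypothetical pod name:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-once
    spec:
      restartPolicy: Never      # with Never, a clean exit ends in phase Succeeded
      containers:
      - name: main
        image: busybox:1.29
        command: ["/bin/sh", "-c", "exit 0"]
    EOF

    # Phase, readiness, and restart count are exactly what the spec inspects.
    kubectl get pod terminate-once \
      -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'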
• [SLOW TEST:29.232 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":287,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:35.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 3 22:03:35.506: INFO: The status of Pod labelsupdatebb6b8ee1-5c9f-4979-b1ef-20711e636fac is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:03:37.508: INFO: The status of Pod labelsupdatebb6b8ee1-5c9f-4979-b1ef-20711e636fac is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:03:39.509: INFO: The status of Pod labelsupdatebb6b8ee1-5c9f-4979-b1ef-20711e636fac is Running (Ready = true) Jun 3 22:03:40.026: INFO: Successfully updated pod "labelsupdatebb6b8ee1-5c9f-4979-b1ef-20711e636fac" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:44.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8090" for this suite. 
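The labelsupdate pod projects its own labels into a file through a downwardAPI volume; when the test modifies the labels, the kubelet rewrites the projected file in place, with no container restart. A minimal reproduction (the pod name, label, and mount path are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        tier: one
    spec:
      containers:
      - name: watcher
        image: busybox:1.29
        command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
    EOF

    # Change the label; the mounted file converges within the kubelet sync period.
    kubectl label pod labels-demo tier=two --overwrite
    kubectl logs labels-demo --tail=2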
• [SLOW TEST:8.604 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":367,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:44.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Jun 3 22:03:44.178: INFO: Waiting up to 5m0s for pod "client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc" in namespace "containers-136" to be "Succeeded or Failed" Jun 3 22:03:44.184: INFO: Pod "client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.365392ms Jun 3 22:03:46.188: INFO: Pod "client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009112773s Jun 3 22:03:48.190: INFO: Pod "client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011724761s STEP: Saw pod success Jun 3 22:03:48.190: INFO: Pod "client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc" satisfied condition "Succeeded or Failed" Jun 3 22:03:48.192: INFO: Trying to get logs from node node2 pod client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc container agnhost-container: STEP: delete the pod Jun 3 22:03:48.208: INFO: Waiting for pod client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc to disappear Jun 3 22:03:48.210: INFO: Pod client-containers-cba27c50-890b-4c47-ade7-b62377f3d7cc no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:48.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-136" for this suite. 
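"Override all" in the test above means the pod spec sets both command, which replaces the image's ENTRYPOINT, and args, which replaces its CMD. Sketched with busybox rather than the suite's agnhost image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["/bin/echo"]            # overrides the image ENTRYPOINT
        args: ["override", "arguments"]   # overrides the image CMD
    EOF

    # Prints "override arguments", mirroring the test's assertion on the container log.
    kubectl logs override-demo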
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":398,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:40.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Jun 3 22:03:40.498: INFO: observed Pod pod-test in namespace pods-8494 in phase Pending with labels: map[test-pod-static:true] & conditions [] Jun 3 22:03:40.500: INFO: observed Pod pod-test in namespace pods-8494 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC }] Jun 3 22:03:40.508: INFO: observed Pod pod-test in namespace pods-8494 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC }] Jun 3 22:03:42.534: INFO: observed Pod pod-test in namespace pods-8494 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC }] Jun 3 22:03:45.423: INFO: observed Pod pod-test in namespace pods-8494 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC }] Jun 3 22:03:48.743: INFO: Found Pod pod-test in namespace pods-8494 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2022-06-03 22:03:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:03:40 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Jun 3 22:03:48.754: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Jun 3 22:03:48.775: INFO: observed event type ADDED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.775: INFO: observed event type MODIFIED Jun 3 22:03:48.776: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:48.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8494" for this suite. • [SLOW TEST:8.325 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":43,"skipped":691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:48.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:48.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8520" for this suite. 
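The ConfigMap lifecycle steps above (create, fetch, patch, list by selector, delete by collection) map one-to-one onto ordinary kubectl verbs. Roughly, with placeholder names and labels:

    kubectl create configmap lifecycle-demo --from-literal=key=original
    kubectl label configmap lifecycle-demo purpose=demo

    # patch the data in place
    kubectl patch configmap lifecycle-demo --type merge -p '{"data":{"key":"patched"}}'

    # list across all namespaces with a label selector
    kubectl get configmaps -A -l purpose=demo

    # delete by collection with the same selector
    kubectl delete configmaps -l purpose=demo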
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":44,"skipped":718,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:20.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8417 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 3 22:03:20.235: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 3 22:03:20.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:03:22.274: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:03:24.275: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:26.275: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:28.275: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:30.275: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:32.274: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:34.275: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:36.273: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:38.276: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:40.273: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 3 22:03:42.274: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 3 22:03:42.279: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 3 22:03:52.302: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 3 22:03:52.302: INFO: Breadth first check of 10.244.3.38 on host 10.10.190.207... Jun 3 22:03:52.305: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.49:9080/dial?request=hostname&protocol=http&host=10.244.3.38&port=8080&tries=1'] Namespace:pod-network-test-8417 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:03:52.305: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:03:52.399: INFO: Waiting for responses: map[] Jun 3 22:03:52.399: INFO: reached 10.244.3.38 after 0/1 tries Jun 3 22:03:52.399: INFO: Breadth first check of 10.244.4.121 on host 10.10.190.208... 
Jun 3 22:03:52.402: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.49:9080/dial?request=hostname&protocol=http&host=10.244.4.121&port=8080&tries=1'] Namespace:pod-network-test-8417 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:03:52.402: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:03:52.692: INFO: Waiting for responses: map[] Jun 3 22:03:52.692: INFO: reached 10.244.4.121 after 0/1 tries Jun 3 22:03:52.692: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:52.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8417" for this suite. • [SLOW TEST:32.490 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":259,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:52.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:03:52.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197" in namespace "projected-334" to be "Succeeded or Failed" Jun 3 22:03:52.762: INFO: Pod "downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197": Phase="Pending", Reason="", readiness=false. Elapsed: 3.699325ms Jun 3 22:03:54.765: INFO: Pod "downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006933602s Jun 3 22:03:56.768: INFO: Pod "downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010259667s STEP: Saw pod success Jun 3 22:03:56.769: INFO: Pod "downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197" satisfied condition "Succeeded or Failed" Jun 3 22:03:56.771: INFO: Trying to get logs from node node2 pod downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197 container client-container: STEP: delete the pod Jun 3 22:03:56.784: INFO: Waiting for pod downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197 to disappear Jun 3 22:03:56.786: INFO: Pod downwardapi-volume-3f382593-a8bc-4b90-a13e-26fa1f9d4197 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:56.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-334" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":265,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:49.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:03:49.033: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 3 22:03:54.038: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Jun 3 22:03:54.044: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Jun 3 22:03:54.050: INFO: observed ReplicaSet test-rs in namespace replicaset-7453 with ReadyReplicas 1, AvailableReplicas 1 Jun 3 22:03:54.059: INFO: observed ReplicaSet test-rs in namespace replicaset-7453 with ReadyReplicas 1, AvailableReplicas 1 Jun 3 22:03:54.073: INFO: observed ReplicaSet test-rs in namespace replicaset-7453 with ReadyReplicas 1, AvailableReplicas 1 Jun 3 22:03:54.078: INFO: observed ReplicaSet test-rs in namespace replicaset-7453 with ReadyReplicas 1, AvailableReplicas 1 Jun 3 22:03:57.238: INFO: observed ReplicaSet test-rs in namespace replicaset-7453 with ReadyReplicas 2, AvailableReplicas 2 Jun 3 22:03:58.287: INFO: observed Replicaset test-rs in namespace replicaset-7453 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:58.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7453" for this suite. 
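The Replace and Patch spec above first scales "test-rs" by updating its spec, then patches it and watches ReadyReplicas climb to 3. Roughly the same sequence expressed with client-go, reusing the namespace and name from this run but otherwise a sketch rather than the test's own code:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()
        ns := "replicaset-7453"

        // The "Replace" half: read-modify-write of the spec to scale up.
        rs, err := cs.AppsV1().ReplicaSets(ns).Get(ctx, "test-rs", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        replicas := int32(3)
        rs.Spec.Replicas = &replicas
        if _, err := cs.AppsV1().ReplicaSets(ns).Update(ctx, rs, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }

        // The "Patch" half: a strategic-merge patch, here adding a label.
        patch := []byte(`{"metadata":{"labels":{"test-rs":"patched"}}}`)
        if _, err := cs.AppsV1().ReplicaSets(ns).Patch(ctx, "test-rs", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
    }

The repeated "observed ReplicaSet ... with ReadyReplicas" lines are the watch events the test consumes until the patched set reports all three replicas ready.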
• [SLOW TEST:9.295 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":45,"skipped":735,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:48.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:03:48.313: INFO: Creating ReplicaSet my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856 Jun 3 22:03:48.318: INFO: Pod name my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856: Found 0 pods out of 1 Jun 3 22:03:53.322: INFO: Pod name my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856: Found 1 pods out of 1 Jun 3 22:03:53.322: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856" is running Jun 3 22:03:53.324: INFO: Pod "my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856-t7r6t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 22:03:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 22:03:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 22:03:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 22:03:48 +0000 UTC Reason: Message:}]) Jun 3 22:03:53.325: INFO: Trying to dial the pod Jun 3 22:03:58.334: INFO: Controller my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856: Got expected result from replica 1 [my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856-t7r6t]: "my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856-t7r6t", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:03:58.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4806" for this suite. 
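"Trying to dial the pod" in the spec above means fetching each replica's hostname over HTTP and comparing it with the pod name. A sketch that does the same through the API-server proxy rather than the e2e framework's helper; port 9376 is the default for agnhost's serve-hostname mode and is an assumption about this spec's pods:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()
        ns := "replicaset-4806"

        // The ReplicaSet stamps its pods with a name label; select on it.
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
            LabelSelector: "name=my-hostname-basic-5d024d87-45dd-4c7c-b100-6f67063d5856",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Dial each replica through the API-server proxy; a
            // serve-hostname container answers with its own pod name.
            body, err := cs.CoreV1().Pods(ns).ProxyGet("http", p.Name, "9376", "/", nil).DoRaw(ctx)
            if err != nil {
                panic(err)
            }
            fmt.Printf("replica %s answered %q\n", p.Name, body)
        }
    }

That echo of the pod's own name is the "Got expected result from replica 1" line in the log.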
• [SLOW TEST:10.052 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":26,"skipped":429,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:56.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:03:56.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd" in namespace "downward-api-3732" to be "Succeeded or Failed" Jun 3 22:03:56.838: INFO: Pod "downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.154218ms Jun 3 22:03:58.842: INFO: Pod "downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007004773s Jun 3 22:04:00.845: INFO: Pod "downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010361467s Jun 3 22:04:02.849: INFO: Pod "downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014383966s STEP: Saw pod success Jun 3 22:04:02.849: INFO: Pod "downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd" satisfied condition "Succeeded or Failed" Jun 3 22:04:02.852: INFO: Trying to get logs from node node1 pod downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd container client-container: STEP: delete the pod Jun 3 22:04:02.865: INFO: Waiting for pod downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd to disappear Jun 3 22:04:02.867: INFO: Pod downwardapi-volume-b951ddf4-540f-440a-9d46-25af0621f9bd no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3732" for this suite. 
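A pod passes the spec above when it can read its own container's memory request out of a downward-API volume file. A compact sketch of such a pod built from client-go types; the names, image, and file layout are illustrative, not the e2e test's exact spec:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // memoryRequestPod builds a pod whose downward-API volume exposes the
    // container's own memory request at /etc/podinfo/memory_request.
    func memoryRequestPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.34",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(memoryRequestPod().Name)
    }

Because the container just cats the file and exits, the pod phase goes to Succeeded, which is exactly the "Succeeded or Failed" condition the log waits on.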
• [SLOW TEST:6.074 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":267,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:02.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:04:02.965: INFO: The status of Pod busybox-scheduling-2fceb1fa-b3c2-4801-9db3-106a4e840a56 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:04.969: INFO: The status of Pod busybox-scheduling-2fceb1fa-b3c2-4801-9db3-106a4e840a56 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:06.968: INFO: The status of Pod busybox-scheduling-2fceb1fa-b3c2-4801-9db3-106a4e840a56 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:06.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6865" for this suite. 
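The busybox pod in the spec above echoes to stdout, and the test then asserts the text shows up in the container log. Reading those logs back through the API server with client-go looks roughly like this, reusing the namespace and pod name from this run (and assuming both still exist when it executes):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GetLogs returns a rest.Request; Raw() executes it and yields the
        // container's stdout, which is what the conformance check inspects.
        raw, err := cs.CoreV1().
            Pods("kubelet-test-6865").
            GetLogs("busybox-scheduling-2fceb1fa-b3c2-4801-9db3-106a4e840a56", &corev1.PodLogOptions{}).
            Do(context.TODO()).
            Raw()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(raw))
    }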
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":292,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:58.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:03:58.397: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 3 22:04:03.402: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 3 22:04:03.402: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 22:04:11.424: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-465 e5749a06-9661-4034-b103-f2c0bab90726 44879 1 2022-06-03 22:04:03 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-06-03 22:04:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 22:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0090b4cc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-03 22:04:03 +0000 UTC,LastTransitionTime:2022-06-03 22:04:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2022-06-03 22:04:11 +0000 UTC,LastTransitionTime:2022-06-03 22:04:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 3 22:04:11.427: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-465 7647fa9a-65ba-41cf-8a45-45846dd77ab4 44868 1 2022-06-03 22:04:03 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e5749a06-9661-4034-b103-f2c0bab90726 0xc0090b50b7 0xc0090b50b8}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5749a06-9661-4034-b103-f2c0bab90726\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0090b5178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:04:11.430: INFO: Pod "test-cleanup-deployment-5b4d99b59b-4mgfd" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-4mgfd test-cleanup-deployment-5b4d99b59b- deployment-465 eef55f83-afc3-4f9e-a145-2fbba2584d29 44867 0 2022-06-03 22:04:03 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.139" ], "mac": "ea:28:e1:3b:07:ec", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.139" ], "mac": "ea:28:e1:3b:07:ec", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 7647fa9a-65ba-41cf-8a45-45846dd77ab4 0xc0090b54ff 0xc0090b5510}] [] [{kube-controller-manager Update v1 2022-06-03 22:04:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7647fa9a-65ba-41cf-8a45-45846dd77ab4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:04:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:04:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hncsc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hncsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:04:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:04:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:04:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:04:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.139,StartTime:2022-06-03 22:04:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:04:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://c26ed9dfb4a10931983f4a45a44d11dcec833b713f99804470d6e5e9d7940336,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:11.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-465" for this suite. 
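The cleanup behaviour verified above hinges on spec.revisionHistoryLimit: the Deployment dump shows RevisionHistoryLimit:*0, so superseded ReplicaSets are pruned as soon as the new one has progressed, which is what "Waiting for deployment test-cleanup-deployment history to be cleaned up" observes. A sketch of a Deployment object with the same setting, with illustrative values taken from the dump:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func cleanupDeployment() *appsv1.Deployment {
        replicas := int32(1)
        historyLimit := int32(0) // keep no superseded ReplicaSets after a rollout
        labels := map[string]string{"name": "cleanup-pod"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas:             &replicas,
                RevisionHistoryLimit: &historyLimit,
                Selector:             &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "agnhost",
                            Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                        }},
                    },
                },
            },
        }
    }

    func main() {
        fmt.Println(cleanupDeployment().Name)
    }

With the default limit of 10 the old ReplicaSet would have been retained for rollback instead of deleted.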
• [SLOW TEST:13.073 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":27,"skipped":440,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:58.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Jun 3 22:03:58.341: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. Jun 3 22:03:58.733: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 3 22:04:00.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:04:02.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jun 3 22:04:04.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:04:06.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:04:08.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:04:10.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890638, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:04:14.980: INFO: Waited 2.207383743s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Jun 3 22:04:15.385: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:16.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3004" for this suite. • [SLOW TEST:17.956 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":46,"skipped":743,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:41.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Jun 3 22:04:21.651: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 3 22:04:21.835: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For 
namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 3 22:04:21.835: INFO: Deleting pod "simpletest.rc-2n2rq" in namespace "gc-9496" Jun 3 22:04:21.844: INFO: Deleting pod "simpletest.rc-48ndn" in namespace "gc-9496" Jun 3 22:04:21.850: INFO: Deleting pod "simpletest.rc-8vdfk" in namespace "gc-9496" Jun 3 22:04:21.856: INFO: Deleting pod "simpletest.rc-br7kt" in namespace "gc-9496" Jun 3 22:04:21.862: INFO: Deleting pod "simpletest.rc-dmfvd" in namespace "gc-9496" Jun 3 22:04:21.868: INFO: Deleting pod "simpletest.rc-llc6g" in namespace "gc-9496" Jun 3 22:04:21.873: INFO: Deleting pod "simpletest.rc-ntbx9" in namespace "gc-9496" Jun 3 22:04:21.880: INFO: Deleting pod "simpletest.rc-nzr2p" in namespace "gc-9496" Jun 3 22:04:21.886: INFO: Deleting pod "simpletest.rc-vfzbq" in namespace "gc-9496" Jun 3 22:04:21.891: INFO: Deleting pod "simpletest.rc-vq5fp" in namespace "gc-9496" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:21.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9496" for this suite. • [SLOW TEST:40.324 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":18,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:11.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:04:11.485: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-663 I0603 22:04:11.500885 40 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-663, replica count: 1 I0603 22:04:12.552163 40 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:04:13.552653 40 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:04:14.553085 40 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:04:15.553679 40 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 22:04:15.660: INFO: Created: latency-svc-54nlh Jun 3 22:04:15.664: INFO: Got endpoints: 
latency-svc-54nlh [10.518895ms] Jun 3 22:04:15.670: INFO: Created: latency-svc-7wcc5 Jun 3 22:04:15.672: INFO: Got endpoints: latency-svc-7wcc5 [8.050439ms] Jun 3 22:04:15.672: INFO: Created: latency-svc-n628l Jun 3 22:04:15.675: INFO: Got endpoints: latency-svc-n628l [10.710286ms] Jun 3 22:04:15.675: INFO: Created: latency-svc-wcthm Jun 3 22:04:15.678: INFO: Got endpoints: latency-svc-wcthm [13.561719ms] Jun 3 22:04:15.678: INFO: Created: latency-svc-55vb5 Jun 3 22:04:15.680: INFO: Created: latency-svc-792xl Jun 3 22:04:15.680: INFO: Got endpoints: latency-svc-55vb5 [16.22869ms] Jun 3 22:04:15.683: INFO: Got endpoints: latency-svc-792xl [18.251281ms] Jun 3 22:04:15.683: INFO: Created: latency-svc-qmsg6 Jun 3 22:04:15.686: INFO: Got endpoints: latency-svc-qmsg6 [21.186753ms] Jun 3 22:04:15.686: INFO: Created: latency-svc-pm2r2 Jun 3 22:04:15.689: INFO: Got endpoints: latency-svc-pm2r2 [24.1456ms] Jun 3 22:04:15.689: INFO: Created: latency-svc-hs7fl Jun 3 22:04:15.691: INFO: Got endpoints: latency-svc-hs7fl [26.551002ms] Jun 3 22:04:15.692: INFO: Created: latency-svc-4rwmz Jun 3 22:04:15.694: INFO: Got endpoints: latency-svc-4rwmz [29.093685ms] Jun 3 22:04:15.695: INFO: Created: latency-svc-7lm64 Jun 3 22:04:15.697: INFO: Created: latency-svc-zxtb9 Jun 3 22:04:15.698: INFO: Got endpoints: latency-svc-7lm64 [33.25992ms] Jun 3 22:04:15.699: INFO: Got endpoints: latency-svc-zxtb9 [34.694629ms] Jun 3 22:04:15.701: INFO: Created: latency-svc-scc4f Jun 3 22:04:15.703: INFO: Got endpoints: latency-svc-scc4f [38.630172ms] Jun 3 22:04:15.703: INFO: Created: latency-svc-lpbcp Jun 3 22:04:15.706: INFO: Created: latency-svc-2s4pb Jun 3 22:04:15.706: INFO: Got endpoints: latency-svc-lpbcp [41.614992ms] Jun 3 22:04:15.709: INFO: Got endpoints: latency-svc-2s4pb [43.61177ms] Jun 3 22:04:15.709: INFO: Created: latency-svc-pfh9q Jun 3 22:04:15.711: INFO: Got endpoints: latency-svc-pfh9q [46.225173ms] Jun 3 22:04:15.713: INFO: Created: latency-svc-n8ljt Jun 3 22:04:15.714: INFO: Created: latency-svc-nfflc Jun 3 22:04:15.716: INFO: Got endpoints: latency-svc-n8ljt [43.579877ms] Jun 3 22:04:15.716: INFO: Got endpoints: latency-svc-nfflc [41.337232ms] Jun 3 22:04:15.717: INFO: Created: latency-svc-bsvgl Jun 3 22:04:15.719: INFO: Got endpoints: latency-svc-bsvgl [41.370948ms] Jun 3 22:04:15.720: INFO: Created: latency-svc-ngprs Jun 3 22:04:15.722: INFO: Created: latency-svc-c726m Jun 3 22:04:15.723: INFO: Got endpoints: latency-svc-ngprs [41.980019ms] Jun 3 22:04:15.725: INFO: Got endpoints: latency-svc-c726m [42.36948ms] Jun 3 22:04:15.726: INFO: Created: latency-svc-qgbdh Jun 3 22:04:15.728: INFO: Got endpoints: latency-svc-qgbdh [42.508165ms] Jun 3 22:04:15.729: INFO: Created: latency-svc-2s9hn Jun 3 22:04:15.732: INFO: Created: latency-svc-wgcst Jun 3 22:04:15.732: INFO: Got endpoints: latency-svc-2s9hn [43.482867ms] Jun 3 22:04:15.734: INFO: Created: latency-svc-t9fcg Jun 3 22:04:15.734: INFO: Got endpoints: latency-svc-wgcst [43.211643ms] Jun 3 22:04:15.736: INFO: Got endpoints: latency-svc-t9fcg [42.681189ms] Jun 3 22:04:15.737: INFO: Created: latency-svc-7vksq Jun 3 22:04:15.751: INFO: Got endpoints: latency-svc-7vksq [52.809376ms] Jun 3 22:04:15.751: INFO: Created: latency-svc-ltlvl Jun 3 22:04:15.754: INFO: Got endpoints: latency-svc-ltlvl [55.055193ms] Jun 3 22:04:15.757: INFO: Created: latency-svc-nt4h5 Jun 3 22:04:15.760: INFO: Got endpoints: latency-svc-nt4h5 [23.261879ms] Jun 3 22:04:15.762: INFO: Created: latency-svc-g9hl9 Jun 3 22:04:15.767: INFO: Got endpoints: latency-svc-g9hl9 [63.358356ms] 
Jun 3 22:04:15.769: INFO: Created: latency-svc-srcf5 Jun 3 22:04:15.771: INFO: Got endpoints: latency-svc-srcf5 [65.204294ms] Jun 3 22:04:15.774: INFO: Created: latency-svc-frq8v Jun 3 22:04:15.780: INFO: Got endpoints: latency-svc-frq8v [71.342823ms] Jun 3 22:04:15.780: INFO: Created: latency-svc-g9djl Jun 3 22:04:15.785: INFO: Got endpoints: latency-svc-g9djl [73.546429ms] Jun 3 22:04:15.786: INFO: Created: latency-svc-99rfn Jun 3 22:04:15.789: INFO: Created: latency-svc-2jmmc Jun 3 22:04:15.791: INFO: Created: latency-svc-zg2nw Jun 3 22:04:15.793: INFO: Created: latency-svc-zz4sm Jun 3 22:04:15.796: INFO: Created: latency-svc-v57f2 Jun 3 22:04:15.798: INFO: Created: latency-svc-8f58x Jun 3 22:04:15.801: INFO: Created: latency-svc-l4jn4 Jun 3 22:04:15.803: INFO: Created: latency-svc-5sf9v Jun 3 22:04:15.805: INFO: Created: latency-svc-sd9l7 Jun 3 22:04:15.809: INFO: Created: latency-svc-55tgv Jun 3 22:04:15.813: INFO: Got endpoints: latency-svc-99rfn [97.314033ms] Jun 3 22:04:15.813: INFO: Created: latency-svc-vxtkx Jun 3 22:04:15.815: INFO: Created: latency-svc-dvtt2 Jun 3 22:04:15.817: INFO: Created: latency-svc-zqsmq Jun 3 22:04:15.820: INFO: Created: latency-svc-6tgj9 Jun 3 22:04:15.823: INFO: Created: latency-svc-fwmff Jun 3 22:04:15.825: INFO: Created: latency-svc-cqq2j Jun 3 22:04:15.863: INFO: Got endpoints: latency-svc-2jmmc [146.940218ms] Jun 3 22:04:15.868: INFO: Created: latency-svc-csrdl Jun 3 22:04:15.914: INFO: Got endpoints: latency-svc-zg2nw [194.478895ms] Jun 3 22:04:15.919: INFO: Created: latency-svc-dxb99 Jun 3 22:04:15.964: INFO: Got endpoints: latency-svc-zz4sm [241.200397ms] Jun 3 22:04:15.970: INFO: Created: latency-svc-znsxf Jun 3 22:04:16.013: INFO: Got endpoints: latency-svc-v57f2 [288.177716ms] Jun 3 22:04:16.018: INFO: Created: latency-svc-2skjl Jun 3 22:04:16.063: INFO: Got endpoints: latency-svc-8f58x [334.946406ms] Jun 3 22:04:16.068: INFO: Created: latency-svc-vt4sp Jun 3 22:04:16.114: INFO: Got endpoints: latency-svc-l4jn4 [381.568365ms] Jun 3 22:04:16.119: INFO: Created: latency-svc-2sj99 Jun 3 22:04:16.163: INFO: Got endpoints: latency-svc-5sf9v [428.237878ms] Jun 3 22:04:16.167: INFO: Created: latency-svc-wtzlk Jun 3 22:04:16.214: INFO: Got endpoints: latency-svc-sd9l7 [463.338376ms] Jun 3 22:04:16.221: INFO: Created: latency-svc-c2gxs Jun 3 22:04:16.264: INFO: Got endpoints: latency-svc-55tgv [509.302476ms] Jun 3 22:04:16.271: INFO: Created: latency-svc-zw2sp Jun 3 22:04:16.315: INFO: Got endpoints: latency-svc-vxtkx [555.0843ms] Jun 3 22:04:16.321: INFO: Created: latency-svc-gc7rt Jun 3 22:04:16.364: INFO: Got endpoints: latency-svc-dvtt2 [597.347405ms] Jun 3 22:04:16.375: INFO: Created: latency-svc-w7cvn Jun 3 22:04:16.413: INFO: Got endpoints: latency-svc-zqsmq [641.929427ms] Jun 3 22:04:16.419: INFO: Created: latency-svc-nmfrl Jun 3 22:04:16.462: INFO: Got endpoints: latency-svc-6tgj9 [682.482467ms] Jun 3 22:04:16.470: INFO: Created: latency-svc-qbp44 Jun 3 22:04:16.514: INFO: Got endpoints: latency-svc-fwmff [728.859005ms] Jun 3 22:04:16.519: INFO: Created: latency-svc-zk427 Jun 3 22:04:16.563: INFO: Got endpoints: latency-svc-cqq2j [750.072834ms] Jun 3 22:04:16.568: INFO: Created: latency-svc-g54zm Jun 3 22:04:16.613: INFO: Got endpoints: latency-svc-csrdl [749.276999ms] Jun 3 22:04:16.621: INFO: Created: latency-svc-mjjjh Jun 3 22:04:16.663: INFO: Got endpoints: latency-svc-dxb99 [749.655063ms] Jun 3 22:04:16.668: INFO: Created: latency-svc-fn5pp Jun 3 22:04:16.714: INFO: Got endpoints: latency-svc-znsxf [749.724413ms] Jun 3 22:04:16.719: 
INFO: Created: latency-svc-82kfm Jun 3 22:04:16.762: INFO: Got endpoints: latency-svc-2skjl [748.526788ms] Jun 3 22:04:16.766: INFO: Created: latency-svc-fbc6v Jun 3 22:04:16.813: INFO: Got endpoints: latency-svc-vt4sp [749.562658ms] Jun 3 22:04:16.818: INFO: Created: latency-svc-zkvt4 Jun 3 22:04:16.865: INFO: Got endpoints: latency-svc-2sj99 [750.952652ms] Jun 3 22:04:16.870: INFO: Created: latency-svc-cxfsr Jun 3 22:04:16.913: INFO: Got endpoints: latency-svc-wtzlk [750.774066ms] Jun 3 22:04:16.922: INFO: Created: latency-svc-s7kxx Jun 3 22:04:16.964: INFO: Got endpoints: latency-svc-c2gxs [749.593209ms] Jun 3 22:04:16.969: INFO: Created: latency-svc-dg6v6 Jun 3 22:04:17.015: INFO: Got endpoints: latency-svc-zw2sp [750.999799ms] Jun 3 22:04:17.021: INFO: Created: latency-svc-mrvbc Jun 3 22:04:17.065: INFO: Got endpoints: latency-svc-gc7rt [750.423603ms] Jun 3 22:04:17.077: INFO: Created: latency-svc-5wnpq Jun 3 22:04:17.113: INFO: Got endpoints: latency-svc-w7cvn [748.808125ms] Jun 3 22:04:17.120: INFO: Created: latency-svc-qcdgv Jun 3 22:04:17.164: INFO: Got endpoints: latency-svc-nmfrl [750.744924ms] Jun 3 22:04:17.170: INFO: Created: latency-svc-glh25 Jun 3 22:04:17.214: INFO: Got endpoints: latency-svc-qbp44 [751.130287ms] Jun 3 22:04:17.219: INFO: Created: latency-svc-dmm7t Jun 3 22:04:17.263: INFO: Got endpoints: latency-svc-zk427 [749.555083ms] Jun 3 22:04:17.268: INFO: Created: latency-svc-jd278 Jun 3 22:04:17.313: INFO: Got endpoints: latency-svc-g54zm [750.135632ms] Jun 3 22:04:17.319: INFO: Created: latency-svc-6sqnp Jun 3 22:04:17.364: INFO: Got endpoints: latency-svc-mjjjh [751.863489ms] Jun 3 22:04:17.371: INFO: Created: latency-svc-7qsvt Jun 3 22:04:17.414: INFO: Got endpoints: latency-svc-fn5pp [750.382129ms] Jun 3 22:04:17.419: INFO: Created: latency-svc-42p8j Jun 3 22:04:17.464: INFO: Got endpoints: latency-svc-82kfm [750.577373ms] Jun 3 22:04:17.470: INFO: Created: latency-svc-5tvm4 Jun 3 22:04:17.514: INFO: Got endpoints: latency-svc-fbc6v [751.982741ms] Jun 3 22:04:17.519: INFO: Created: latency-svc-shqkb Jun 3 22:04:17.564: INFO: Got endpoints: latency-svc-zkvt4 [751.432628ms] Jun 3 22:04:17.570: INFO: Created: latency-svc-8sbpj Jun 3 22:04:17.614: INFO: Got endpoints: latency-svc-cxfsr [749.375155ms] Jun 3 22:04:17.619: INFO: Created: latency-svc-xwttq Jun 3 22:04:17.664: INFO: Got endpoints: latency-svc-s7kxx [750.024375ms] Jun 3 22:04:17.668: INFO: Created: latency-svc-tdps5 Jun 3 22:04:17.714: INFO: Got endpoints: latency-svc-dg6v6 [750.820551ms] Jun 3 22:04:17.720: INFO: Created: latency-svc-wdkxb Jun 3 22:04:17.763: INFO: Got endpoints: latency-svc-mrvbc [747.995475ms] Jun 3 22:04:17.769: INFO: Created: latency-svc-ww5tz Jun 3 22:04:17.813: INFO: Got endpoints: latency-svc-5wnpq [747.58112ms] Jun 3 22:04:17.820: INFO: Created: latency-svc-vt9q4 Jun 3 22:04:17.864: INFO: Got endpoints: latency-svc-qcdgv [751.119184ms] Jun 3 22:04:17.869: INFO: Created: latency-svc-cbcxr Jun 3 22:04:17.914: INFO: Got endpoints: latency-svc-glh25 [749.424578ms] Jun 3 22:04:17.920: INFO: Created: latency-svc-jdr6z Jun 3 22:04:17.963: INFO: Got endpoints: latency-svc-dmm7t [749.476169ms] Jun 3 22:04:17.969: INFO: Created: latency-svc-4tqvx Jun 3 22:04:18.013: INFO: Got endpoints: latency-svc-jd278 [749.638618ms] Jun 3 22:04:18.020: INFO: Created: latency-svc-w6wf9 Jun 3 22:04:18.064: INFO: Got endpoints: latency-svc-6sqnp [750.099098ms] Jun 3 22:04:18.070: INFO: Created: latency-svc-5snxv Jun 3 22:04:18.114: INFO: Got endpoints: latency-svc-7qsvt [749.570441ms] Jun 3 22:04:18.120: 
INFO: Created: latency-svc-jhpkj Jun 3 22:04:18.164: INFO: Got endpoints: latency-svc-42p8j [749.885747ms] Jun 3 22:04:18.170: INFO: Created: latency-svc-dk7bg Jun 3 22:04:18.214: INFO: Got endpoints: latency-svc-5tvm4 [750.267432ms] Jun 3 22:04:18.220: INFO: Created: latency-svc-znmvb Jun 3 22:04:18.263: INFO: Got endpoints: latency-svc-shqkb [749.48411ms] Jun 3 22:04:18.271: INFO: Created: latency-svc-q9md4 Jun 3 22:04:18.314: INFO: Got endpoints: latency-svc-8sbpj [749.540556ms] Jun 3 22:04:18.319: INFO: Created: latency-svc-jwljj Jun 3 22:04:18.364: INFO: Got endpoints: latency-svc-xwttq [749.880708ms] Jun 3 22:04:18.370: INFO: Created: latency-svc-8588d Jun 3 22:04:18.413: INFO: Got endpoints: latency-svc-tdps5 [749.480827ms] Jun 3 22:04:18.419: INFO: Created: latency-svc-45dxd Jun 3 22:04:18.463: INFO: Got endpoints: latency-svc-wdkxb [748.744411ms] Jun 3 22:04:18.469: INFO: Created: latency-svc-fwmk9 Jun 3 22:04:18.514: INFO: Got endpoints: latency-svc-ww5tz [750.755641ms] Jun 3 22:04:18.520: INFO: Created: latency-svc-jx7px Jun 3 22:04:18.564: INFO: Got endpoints: latency-svc-vt9q4 [751.176865ms] Jun 3 22:04:18.570: INFO: Created: latency-svc-h9w85 Jun 3 22:04:18.614: INFO: Got endpoints: latency-svc-cbcxr [749.672576ms] Jun 3 22:04:18.620: INFO: Created: latency-svc-djwwk Jun 3 22:04:18.663: INFO: Got endpoints: latency-svc-jdr6z [749.645123ms] Jun 3 22:04:18.670: INFO: Created: latency-svc-xcxgj Jun 3 22:04:18.714: INFO: Got endpoints: latency-svc-4tqvx [750.241185ms] Jun 3 22:04:18.719: INFO: Created: latency-svc-xczvk Jun 3 22:04:18.763: INFO: Got endpoints: latency-svc-w6wf9 [750.255311ms] Jun 3 22:04:18.769: INFO: Created: latency-svc-4b4gf Jun 3 22:04:18.814: INFO: Got endpoints: latency-svc-5snxv [749.981781ms] Jun 3 22:04:18.819: INFO: Created: latency-svc-lg8vg Jun 3 22:04:18.864: INFO: Got endpoints: latency-svc-jhpkj [749.423447ms] Jun 3 22:04:18.869: INFO: Created: latency-svc-9q4wr Jun 3 22:04:18.914: INFO: Got endpoints: latency-svc-dk7bg [750.26678ms] Jun 3 22:04:18.920: INFO: Created: latency-svc-29mqv Jun 3 22:04:18.963: INFO: Got endpoints: latency-svc-znmvb [748.815979ms] Jun 3 22:04:18.969: INFO: Created: latency-svc-v295m Jun 3 22:04:19.014: INFO: Got endpoints: latency-svc-q9md4 [750.192849ms] Jun 3 22:04:19.020: INFO: Created: latency-svc-njmtm Jun 3 22:04:19.063: INFO: Got endpoints: latency-svc-jwljj [749.391833ms] Jun 3 22:04:19.069: INFO: Created: latency-svc-sdz24 Jun 3 22:04:19.113: INFO: Got endpoints: latency-svc-8588d [748.631956ms] Jun 3 22:04:19.118: INFO: Created: latency-svc-n7fcp Jun 3 22:04:19.163: INFO: Got endpoints: latency-svc-45dxd [749.91546ms] Jun 3 22:04:19.168: INFO: Created: latency-svc-mbqvl Jun 3 22:04:19.213: INFO: Got endpoints: latency-svc-fwmk9 [749.558769ms] Jun 3 22:04:19.218: INFO: Created: latency-svc-n4vqg Jun 3 22:04:19.263: INFO: Got endpoints: latency-svc-jx7px [749.447131ms] Jun 3 22:04:19.269: INFO: Created: latency-svc-wvq5t Jun 3 22:04:19.314: INFO: Got endpoints: latency-svc-h9w85 [749.335582ms] Jun 3 22:04:19.320: INFO: Created: latency-svc-h57zl Jun 3 22:04:19.363: INFO: Got endpoints: latency-svc-djwwk [748.977334ms] Jun 3 22:04:19.369: INFO: Created: latency-svc-pdhtf Jun 3 22:04:19.413: INFO: Got endpoints: latency-svc-xcxgj [749.398144ms] Jun 3 22:04:19.419: INFO: Created: latency-svc-g7z45 Jun 3 22:04:19.463: INFO: Got endpoints: latency-svc-xczvk [749.364849ms] Jun 3 22:04:19.468: INFO: Created: latency-svc-stw8d Jun 3 22:04:19.513: INFO: Got endpoints: latency-svc-4b4gf [749.749809ms] Jun 3 22:04:19.519: 
INFO: Created: latency-svc-bgt2x Jun 3 22:04:19.563: INFO: Got endpoints: latency-svc-lg8vg [749.633658ms] Jun 3 22:04:19.570: INFO: Created: latency-svc-v55g6 Jun 3 22:04:19.612: INFO: Got endpoints: latency-svc-9q4wr [748.880066ms] Jun 3 22:04:19.617: INFO: Created: latency-svc-zbhsw Jun 3 22:04:19.664: INFO: Got endpoints: latency-svc-29mqv [749.714626ms] Jun 3 22:04:19.670: INFO: Created: latency-svc-mcbsk Jun 3 22:04:19.714: INFO: Got endpoints: latency-svc-v295m [750.345026ms] Jun 3 22:04:19.720: INFO: Created: latency-svc-l9cdf Jun 3 22:04:19.762: INFO: Got endpoints: latency-svc-njmtm [748.71952ms] Jun 3 22:04:19.768: INFO: Created: latency-svc-28j65 Jun 3 22:04:19.814: INFO: Got endpoints: latency-svc-sdz24 [750.754347ms] Jun 3 22:04:19.819: INFO: Created: latency-svc-kd45b Jun 3 22:04:19.864: INFO: Got endpoints: latency-svc-n7fcp [751.037626ms] Jun 3 22:04:19.870: INFO: Created: latency-svc-qjdgz Jun 3 22:04:19.913: INFO: Got endpoints: latency-svc-mbqvl [750.000624ms] Jun 3 22:04:19.919: INFO: Created: latency-svc-9jtql Jun 3 22:04:19.963: INFO: Got endpoints: latency-svc-n4vqg [749.645575ms] Jun 3 22:04:19.969: INFO: Created: latency-svc-zznzn Jun 3 22:04:20.013: INFO: Got endpoints: latency-svc-wvq5t [749.895375ms] Jun 3 22:04:20.019: INFO: Created: latency-svc-g4zxr Jun 3 22:04:20.063: INFO: Got endpoints: latency-svc-h57zl [749.421595ms] Jun 3 22:04:20.068: INFO: Created: latency-svc-pw479 Jun 3 22:04:20.114: INFO: Got endpoints: latency-svc-pdhtf [751.004083ms] Jun 3 22:04:20.122: INFO: Created: latency-svc-knt95 Jun 3 22:04:20.163: INFO: Got endpoints: latency-svc-g7z45 [750.558048ms] Jun 3 22:04:20.169: INFO: Created: latency-svc-5l8tx Jun 3 22:04:20.214: INFO: Got endpoints: latency-svc-stw8d [750.89442ms] Jun 3 22:04:20.220: INFO: Created: latency-svc-q4rxp Jun 3 22:04:20.264: INFO: Got endpoints: latency-svc-bgt2x [750.994885ms] Jun 3 22:04:20.270: INFO: Created: latency-svc-bs2gw Jun 3 22:04:20.314: INFO: Got endpoints: latency-svc-v55g6 [750.344082ms] Jun 3 22:04:20.320: INFO: Created: latency-svc-7lmnq Jun 3 22:04:20.367: INFO: Got endpoints: latency-svc-zbhsw [754.228529ms] Jun 3 22:04:20.372: INFO: Created: latency-svc-hb4sc Jun 3 22:04:20.413: INFO: Got endpoints: latency-svc-mcbsk [749.134304ms] Jun 3 22:04:20.419: INFO: Created: latency-svc-tbzp7 Jun 3 22:04:20.464: INFO: Got endpoints: latency-svc-l9cdf [750.292249ms] Jun 3 22:04:20.471: INFO: Created: latency-svc-x6fgl Jun 3 22:04:20.514: INFO: Got endpoints: latency-svc-28j65 [751.465581ms] Jun 3 22:04:20.519: INFO: Created: latency-svc-mbsw9 Jun 3 22:04:20.564: INFO: Got endpoints: latency-svc-kd45b [750.238281ms] Jun 3 22:04:20.570: INFO: Created: latency-svc-2jssb Jun 3 22:04:20.614: INFO: Got endpoints: latency-svc-qjdgz [749.717062ms] Jun 3 22:04:20.619: INFO: Created: latency-svc-jf4hm Jun 3 22:04:20.663: INFO: Got endpoints: latency-svc-9jtql [749.631594ms] Jun 3 22:04:20.668: INFO: Created: latency-svc-9qmpp Jun 3 22:04:20.714: INFO: Got endpoints: latency-svc-zznzn [751.662445ms] Jun 3 22:04:20.719: INFO: Created: latency-svc-zkhb9 Jun 3 22:04:20.764: INFO: Got endpoints: latency-svc-g4zxr [750.633413ms] Jun 3 22:04:20.770: INFO: Created: latency-svc-lh8p5 Jun 3 22:04:20.814: INFO: Got endpoints: latency-svc-pw479 [750.592477ms] Jun 3 22:04:20.819: INFO: Created: latency-svc-qb4xt Jun 3 22:04:20.864: INFO: Got endpoints: latency-svc-knt95 [749.68344ms] Jun 3 22:04:20.869: INFO: Created: latency-svc-rmdjk Jun 3 22:04:20.915: INFO: Got endpoints: latency-svc-5l8tx [751.001853ms] Jun 3 22:04:20.919: 
INFO: Created: latency-svc-rrb9c Jun 3 22:04:20.963: INFO: Got endpoints: latency-svc-q4rxp [749.215871ms] Jun 3 22:04:20.969: INFO: Created: latency-svc-68b74 Jun 3 22:04:21.013: INFO: Got endpoints: latency-svc-bs2gw [748.334457ms] Jun 3 22:04:21.019: INFO: Created: latency-svc-t4vxw Jun 3 22:04:21.064: INFO: Got endpoints: latency-svc-7lmnq [749.813634ms] Jun 3 22:04:21.069: INFO: Created: latency-svc-whfwk Jun 3 22:04:21.114: INFO: Got endpoints: latency-svc-hb4sc [747.042549ms] Jun 3 22:04:21.120: INFO: Created: latency-svc-lgd5d Jun 3 22:04:21.163: INFO: Got endpoints: latency-svc-tbzp7 [750.245426ms] Jun 3 22:04:21.171: INFO: Created: latency-svc-pclzq Jun 3 22:04:21.214: INFO: Got endpoints: latency-svc-x6fgl [749.688011ms] Jun 3 22:04:21.220: INFO: Created: latency-svc-gf6kj Jun 3 22:04:21.263: INFO: Got endpoints: latency-svc-mbsw9 [749.267611ms] Jun 3 22:04:21.269: INFO: Created: latency-svc-hmfjq Jun 3 22:04:21.313: INFO: Got endpoints: latency-svc-2jssb [748.734619ms] Jun 3 22:04:21.319: INFO: Created: latency-svc-xtlh4 Jun 3 22:04:21.363: INFO: Got endpoints: latency-svc-jf4hm [749.163036ms] Jun 3 22:04:21.369: INFO: Created: latency-svc-zb6gp Jun 3 22:04:21.413: INFO: Got endpoints: latency-svc-9qmpp [750.477223ms] Jun 3 22:04:21.419: INFO: Created: latency-svc-2bc94 Jun 3 22:04:21.464: INFO: Got endpoints: latency-svc-zkhb9 [749.80357ms] Jun 3 22:04:21.469: INFO: Created: latency-svc-jsh4n Jun 3 22:04:21.513: INFO: Got endpoints: latency-svc-lh8p5 [749.012059ms] Jun 3 22:04:21.519: INFO: Created: latency-svc-9j5g7 Jun 3 22:04:21.563: INFO: Got endpoints: latency-svc-qb4xt [749.339756ms] Jun 3 22:04:21.570: INFO: Created: latency-svc-xmpsp Jun 3 22:04:21.614: INFO: Got endpoints: latency-svc-rmdjk [750.052451ms] Jun 3 22:04:21.620: INFO: Created: latency-svc-wbq9r Jun 3 22:04:21.713: INFO: Got endpoints: latency-svc-rrb9c [798.828548ms] Jun 3 22:04:21.719: INFO: Created: latency-svc-pbzs8 Jun 3 22:04:21.763: INFO: Got endpoints: latency-svc-68b74 [800.123358ms] Jun 3 22:04:21.770: INFO: Created: latency-svc-5q5mg Jun 3 22:04:21.813: INFO: Got endpoints: latency-svc-t4vxw [800.771949ms] Jun 3 22:04:21.820: INFO: Created: latency-svc-thsbm Jun 3 22:04:21.863: INFO: Got endpoints: latency-svc-whfwk [799.442297ms] Jun 3 22:04:21.869: INFO: Created: latency-svc-k29sv Jun 3 22:04:21.914: INFO: Got endpoints: latency-svc-lgd5d [799.776648ms] Jun 3 22:04:21.920: INFO: Created: latency-svc-pz99v Jun 3 22:04:21.963: INFO: Got endpoints: latency-svc-pclzq [799.530403ms] Jun 3 22:04:21.970: INFO: Created: latency-svc-jhblp Jun 3 22:04:22.014: INFO: Got endpoints: latency-svc-gf6kj [800.065072ms] Jun 3 22:04:22.019: INFO: Created: latency-svc-f5zd9 Jun 3 22:04:22.064: INFO: Got endpoints: latency-svc-hmfjq [800.440742ms] Jun 3 22:04:22.101: INFO: Created: latency-svc-9lp6m Jun 3 22:04:22.115: INFO: Got endpoints: latency-svc-xtlh4 [802.248706ms] Jun 3 22:04:22.122: INFO: Created: latency-svc-q6dgt Jun 3 22:04:22.165: INFO: Got endpoints: latency-svc-zb6gp [802.453489ms] Jun 3 22:04:22.175: INFO: Created: latency-svc-d2877 Jun 3 22:04:22.214: INFO: Got endpoints: latency-svc-2bc94 [800.229576ms] Jun 3 22:04:22.220: INFO: Created: latency-svc-ldw8l Jun 3 22:04:22.262: INFO: Got endpoints: latency-svc-jsh4n [797.805675ms] Jun 3 22:04:22.267: INFO: Created: latency-svc-qp5f2 Jun 3 22:04:22.314: INFO: Got endpoints: latency-svc-9j5g7 [801.456859ms] Jun 3 22:04:22.320: INFO: Created: latency-svc-ts2q8 Jun 3 22:04:22.364: INFO: Got endpoints: latency-svc-xmpsp [800.634645ms] Jun 3 22:04:22.370: 
INFO: Created: latency-svc-g79kq Jun 3 22:04:22.413: INFO: Got endpoints: latency-svc-wbq9r [799.549563ms] Jun 3 22:04:22.419: INFO: Created: latency-svc-rqdn2 Jun 3 22:04:22.464: INFO: Got endpoints: latency-svc-pbzs8 [750.390902ms] Jun 3 22:04:22.469: INFO: Created: latency-svc-qxf6d Jun 3 22:04:22.514: INFO: Got endpoints: latency-svc-5q5mg [750.256163ms] Jun 3 22:04:22.520: INFO: Created: latency-svc-22m9h Jun 3 22:04:22.563: INFO: Got endpoints: latency-svc-thsbm [749.129776ms] Jun 3 22:04:22.568: INFO: Created: latency-svc-22pss Jun 3 22:04:22.614: INFO: Got endpoints: latency-svc-k29sv [751.275989ms] Jun 3 22:04:22.621: INFO: Created: latency-svc-z4ctd Jun 3 22:04:22.663: INFO: Got endpoints: latency-svc-pz99v [749.36599ms] Jun 3 22:04:22.669: INFO: Created: latency-svc-bfl74 Jun 3 22:04:22.713: INFO: Got endpoints: latency-svc-jhblp [749.891859ms] Jun 3 22:04:22.718: INFO: Created: latency-svc-x67t9 Jun 3 22:04:22.764: INFO: Got endpoints: latency-svc-f5zd9 [749.973374ms] Jun 3 22:04:22.770: INFO: Created: latency-svc-46h29 Jun 3 22:04:22.813: INFO: Got endpoints: latency-svc-9lp6m [749.285779ms] Jun 3 22:04:22.819: INFO: Created: latency-svc-kkcnb Jun 3 22:04:22.914: INFO: Got endpoints: latency-svc-q6dgt [798.121303ms] Jun 3 22:04:22.920: INFO: Created: latency-svc-7css5 Jun 3 22:04:22.964: INFO: Got endpoints: latency-svc-d2877 [799.04443ms] Jun 3 22:04:22.970: INFO: Created: latency-svc-jbr46 Jun 3 22:04:23.014: INFO: Got endpoints: latency-svc-ldw8l [800.0578ms] Jun 3 22:04:23.021: INFO: Created: latency-svc-8gh65 Jun 3 22:04:23.063: INFO: Got endpoints: latency-svc-qp5f2 [800.579913ms] Jun 3 22:04:23.068: INFO: Created: latency-svc-rvk2s Jun 3 22:04:23.113: INFO: Got endpoints: latency-svc-ts2q8 [799.053464ms] Jun 3 22:04:23.119: INFO: Created: latency-svc-8rgtp Jun 3 22:04:23.163: INFO: Got endpoints: latency-svc-g79kq [799.251674ms] Jun 3 22:04:23.168: INFO: Created: latency-svc-4d6nh Jun 3 22:04:23.214: INFO: Got endpoints: latency-svc-rqdn2 [800.232583ms] Jun 3 22:04:23.220: INFO: Created: latency-svc-q24sp Jun 3 22:04:23.264: INFO: Got endpoints: latency-svc-qxf6d [800.386501ms] Jun 3 22:04:23.270: INFO: Created: latency-svc-s89kf Jun 3 22:04:23.313: INFO: Got endpoints: latency-svc-22m9h [799.57402ms] Jun 3 22:04:23.319: INFO: Created: latency-svc-5r5fm Jun 3 22:04:23.364: INFO: Got endpoints: latency-svc-22pss [801.493835ms] Jun 3 22:04:23.371: INFO: Created: latency-svc-5jnz8 Jun 3 22:04:23.414: INFO: Got endpoints: latency-svc-z4ctd [799.485471ms] Jun 3 22:04:23.420: INFO: Created: latency-svc-zwbdd Jun 3 22:04:23.464: INFO: Got endpoints: latency-svc-bfl74 [800.773755ms] Jun 3 22:04:23.470: INFO: Created: latency-svc-nbvpb Jun 3 22:04:23.514: INFO: Got endpoints: latency-svc-x67t9 [801.499572ms] Jun 3 22:04:23.520: INFO: Created: latency-svc-5bwf7 Jun 3 22:04:23.563: INFO: Got endpoints: latency-svc-46h29 [798.67109ms] Jun 3 22:04:23.568: INFO: Created: latency-svc-jjnpd Jun 3 22:04:23.614: INFO: Got endpoints: latency-svc-kkcnb [800.634591ms] Jun 3 22:04:23.663: INFO: Got endpoints: latency-svc-7css5 [749.068367ms] Jun 3 22:04:23.713: INFO: Got endpoints: latency-svc-jbr46 [748.943721ms] Jun 3 22:04:23.764: INFO: Got endpoints: latency-svc-8gh65 [749.838281ms] Jun 3 22:04:23.813: INFO: Got endpoints: latency-svc-rvk2s [750.661188ms] Jun 3 22:04:23.863: INFO: Got endpoints: latency-svc-8rgtp [749.833537ms] Jun 3 22:04:23.914: INFO: Got endpoints: latency-svc-4d6nh [750.506888ms] Jun 3 22:04:23.964: INFO: Got endpoints: latency-svc-q24sp [750.547619ms] Jun 3 
22:04:24.015: INFO: Got endpoints: latency-svc-s89kf [750.540919ms] Jun 3 22:04:24.064: INFO: Got endpoints: latency-svc-5r5fm [751.008834ms] Jun 3 22:04:24.114: INFO: Got endpoints: latency-svc-5jnz8 [749.631446ms] Jun 3 22:04:24.164: INFO: Got endpoints: latency-svc-zwbdd [750.158474ms] Jun 3 22:04:24.214: INFO: Got endpoints: latency-svc-nbvpb [750.14652ms] Jun 3 22:04:24.264: INFO: Got endpoints: latency-svc-5bwf7 [749.082466ms] Jun 3 22:04:24.314: INFO: Got endpoints: latency-svc-jjnpd [751.092574ms] Jun 3 22:04:24.314: INFO: Latencies: [8.050439ms 10.710286ms 13.561719ms 16.22869ms 18.251281ms 21.186753ms 23.261879ms 24.1456ms 26.551002ms 29.093685ms 33.25992ms 34.694629ms 38.630172ms 41.337232ms 41.370948ms 41.614992ms 41.980019ms 42.36948ms 42.508165ms 42.681189ms 43.211643ms 43.482867ms 43.579877ms 43.61177ms 46.225173ms 52.809376ms 55.055193ms 63.358356ms 65.204294ms 71.342823ms 73.546429ms 97.314033ms 146.940218ms 194.478895ms 241.200397ms 288.177716ms 334.946406ms 381.568365ms 428.237878ms 463.338376ms 509.302476ms 555.0843ms 597.347405ms 641.929427ms 682.482467ms 728.859005ms 747.042549ms 747.58112ms 747.995475ms 748.334457ms 748.526788ms 748.631956ms 748.71952ms 748.734619ms 748.744411ms 748.808125ms 748.815979ms 748.880066ms 748.943721ms 748.977334ms 749.012059ms 749.068367ms 749.082466ms 749.129776ms 749.134304ms 749.163036ms 749.215871ms 749.267611ms 749.276999ms 749.285779ms 749.335582ms 749.339756ms 749.364849ms 749.36599ms 749.375155ms 749.391833ms 749.398144ms 749.421595ms 749.423447ms 749.424578ms 749.447131ms 749.476169ms 749.480827ms 749.48411ms 749.540556ms 749.555083ms 749.558769ms 749.562658ms 749.570441ms 749.593209ms 749.631446ms 749.631594ms 749.633658ms 749.638618ms 749.645123ms 749.645575ms 749.655063ms 749.672576ms 749.68344ms 749.688011ms 749.714626ms 749.717062ms 749.724413ms 749.749809ms 749.80357ms 749.813634ms 749.833537ms 749.838281ms 749.880708ms 749.885747ms 749.891859ms 749.895375ms 749.91546ms 749.973374ms 749.981781ms 750.000624ms 750.024375ms 750.052451ms 750.072834ms 750.099098ms 750.135632ms 750.14652ms 750.158474ms 750.192849ms 750.238281ms 750.241185ms 750.245426ms 750.255311ms 750.256163ms 750.26678ms 750.267432ms 750.292249ms 750.344082ms 750.345026ms 750.382129ms 750.390902ms 750.423603ms 750.477223ms 750.506888ms 750.540919ms 750.547619ms 750.558048ms 750.577373ms 750.592477ms 750.633413ms 750.661188ms 750.744924ms 750.754347ms 750.755641ms 750.774066ms 750.820551ms 750.89442ms 750.952652ms 750.994885ms 750.999799ms 751.001853ms 751.004083ms 751.008834ms 751.037626ms 751.092574ms 751.119184ms 751.130287ms 751.176865ms 751.275989ms 751.432628ms 751.465581ms 751.662445ms 751.863489ms 751.982741ms 754.228529ms 797.805675ms 798.121303ms 798.67109ms 798.828548ms 799.04443ms 799.053464ms 799.251674ms 799.442297ms 799.485471ms 799.530403ms 799.549563ms 799.57402ms 799.776648ms 800.0578ms 800.065072ms 800.123358ms 800.229576ms 800.232583ms 800.386501ms 800.440742ms 800.579913ms 800.634591ms 800.634645ms 800.771949ms 800.773755ms 801.456859ms 801.493835ms 801.499572ms 802.248706ms 802.453489ms] Jun 3 22:04:24.314: INFO: 50 %ile: 749.714626ms Jun 3 22:04:24.314: INFO: 90 %ile: 799.549563ms Jun 3 22:04:24.314: INFO: 99 %ile: 802.248706ms Jun 3 22:04:24.314: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:24.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "svc-latency-663" for this suite. • [SLOW TEST:12.868 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":28,"skipped":449,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:16.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:04:16.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:04:18.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890656, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890656, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890656, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890656, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:04:21.911: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 3 22:04:28.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-5524 attach --namespace=webhook-5524 to-be-attached-pod -i -c=container1' Jun 3 22:04:29.118: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:29.123: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5524" for this suite. STEP: Destroying namespace "webhook-5524-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.831 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":47,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:29.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-3215/configmap-test-5b4dfe43-4dba-4252-a3e3-73f5be7fdf54 STEP: Creating a pod to test consume configMaps Jun 3 22:04:29.270: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548" in namespace "configmap-3215" to be "Succeeded or Failed" Jun 3 22:04:29.275: INFO: Pod "pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548": Phase="Pending", Reason="", readiness=false. Elapsed: 5.247081ms Jun 3 22:04:31.278: INFO: Pod "pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00809994s Jun 3 22:04:33.282: INFO: Pod "pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011471259s STEP: Saw pod success Jun 3 22:04:33.282: INFO: Pod "pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548" satisfied condition "Succeeded or Failed" Jun 3 22:04:33.284: INFO: Trying to get logs from node node1 pod pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548 container env-test: STEP: delete the pod Jun 3 22:04:33.341: INFO: Waiting for pod pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548 to disappear Jun 3 22:04:33.343: INFO: Pod pod-configmaps-a1e77d92-d57f-4862-be1e-9af7831b2548 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:33.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3215" for this suite. 
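
The ConfigMap test just above builds a short-lived pod whose container environment is populated from a ConfigMap key, then asserts the pod reaches "Succeeded or Failed". As a rough sketch of that pod shape using the k8s.io/api types (the image, the key name "data-1", and the object names here are illustrative stand-ins, not the generated e2e values; the suite's actual pod spec lives in test/e2e/common/node/configmap.go):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// envTestPod builds a pod that surfaces one ConfigMap key as an environment
// variable and simply prints its environment, mirroring the test's shape.
func envTestPod(ns, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach Succeeded
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.35", // placeholder; the suite uses its own test images
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							Key:                  "data-1", // assumed key name
						},
					},
				}},
			}},
		},
	}
}

func main() {
	pod := envTestPod("configmap-3215", "configmap-test-example")
	fmt.Printf("would create pod %s/%s\n", pod.Namespace, pod.Name)
}
```
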
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:33.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:33.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2737" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":808,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:33.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-05ba5eb1-7fee-4af8-8bd8-7d3e1b3e9097 STEP: Creating secret with name s-test-opt-upd-80c604d2-d439-4566-bceb-b234be9e8362 STEP: Creating the pod Jun 3 22:04:33.544: INFO: The status of Pod pod-projected-secrets-8a059e68-dde1-4f26-8a21-da876b414cb8 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:35.549: INFO: The status of Pod pod-projected-secrets-8a059e68-dde1-4f26-8a21-da876b414cb8 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:37.548: INFO: The status of Pod pod-projected-secrets-8a059e68-dde1-4f26-8a21-da876b414cb8 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-05ba5eb1-7fee-4af8-8bd8-7d3e1b3e9097 STEP: Updating secret s-test-opt-upd-80c604d2-d439-4566-bceb-b234be9e8362 STEP: Creating secret with name s-test-opt-create-2317b0f0-f4bd-42a3-b897-581d212380f2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:41.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4424" for this suite. 
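
The projected-secret test just above mounts secrets through a projected volume whose source is marked optional, which is what lets it delete s-test-opt-del-*, update s-test-opt-upd-*, and create s-test-opt-create-* while the pod stays Running and the kubelet refreshes the mounted files. A minimal sketch of that volume wiring, assuming a placeholder secret name rather than the generated one:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedOptionalSecret shows the volume shape the test exercises: a
// projected volume whose secret source is optional, so the pod tolerates the
// secret being deleted and recreated while the mount is live.
func projectedOptionalSecret(secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Optional:             &optional,
					},
				}},
			},
		},
	}
}

func main() {
	v := projectedOptionalSecret("s-test-opt-del-example")
	fmt.Println("volume:", v.Name)
}
```
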
• [SLOW TEST:8.130 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":828,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:41.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:04:41.671: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:47.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7335" for this suite. • [SLOW TEST:5.563 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":51,"skipped":835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:47.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics Jun 3 22:04:48.356: INFO: The status of Pod 
kube-controller-manager-master3 is Running (Ready = true) Jun 3 22:04:48.558: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:48.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6189" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":52,"skipped":864,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:03:06.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5484 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5484 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5484 Jun 3 22:03:06.307: INFO: Found 0 stateful pods, waiting for 1 Jun 3 22:03:16.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 3 22:03:16.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 22:03:16.591: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 22:03:16.591: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 22:03:16.591: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 22:03:16.593: INFO: Waiting 
for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 3 22:03:26.597: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 22:03:26.597: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:03:26.609: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999948s Jun 3 22:03:27.612: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997405888s Jun 3 22:03:28.617: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993356107s Jun 3 22:03:29.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988828489s Jun 3 22:03:30.625: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984261626s Jun 3 22:03:31.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.980807956s Jun 3 22:03:32.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.976886282s Jun 3 22:03:33.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.97064413s Jun 3 22:03:34.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.967622643s Jun 3 22:03:35.645: INFO: Verifying statefulset ss doesn't scale past 1 for another 964.128146ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5484 Jun 3 22:03:36.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 22:03:36.988: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 22:03:36.988: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 22:03:36.988: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 22:03:36.992: INFO: Found 1 stateful pods, waiting for 3 Jun 3 22:03:46.999: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:03:46.999: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:03:46.999: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 3 22:03:47.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 22:03:48.196: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 22:03:48.196: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 22:03:48.196: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 22:03:48.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 22:03:48.449: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 22:03:48.449: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 22:03:48.449: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 22:03:48.449: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 22:03:48.808: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 22:03:48.809: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 22:03:48.809: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 22:03:48.809: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:03:48.811: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 3 22:03:58.818: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 3 22:03:58.818: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 3 22:03:58.818: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 3 22:03:58.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999427s Jun 3 22:03:59.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996651754s Jun 3 22:04:00.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991932158s Jun 3 22:04:01.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987797136s Jun 3 22:04:02.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983665869s Jun 3 22:04:03.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978645906s Jun 3 22:04:04.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974725836s Jun 3 22:04:05.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970441068s Jun 3 22:04:06.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.965994984s Jun 3 22:04:07.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 958.9672ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5484 Jun 3 22:04:08.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 22:04:10.471: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 22:04:10.471: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 22:04:10.471: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 22:04:10.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 22:04:10.760: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 22:04:10.760: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 22:04:10.761: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 22:04:10.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5484 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 22:04:11.009: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 22:04:11.009: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 22:04:11.009: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 22:04:11.009: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 3 22:04:51.023: INFO: Deleting all statefulset in ns statefulset-5484 Jun 3 22:04:51.025: INFO: Scaling statefulset ss to 0 Jun 3 22:04:51.033: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:04:51.035: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:51.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5484" for this suite. • [SLOW TEST:104.781 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":21,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Jun 3 22:04:22.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 create -f -' Jun 3 22:04:22.518: INFO: stderr: "" Jun 3 22:04:22.518: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 3 22:04:22.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:22.700: INFO: stderr: "" Jun 3 22:04:22.700: INFO: stdout: "update-demo-nautilus-7wlzk update-demo-nautilus-cgtrr " Jun 3 22:04:22.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-7wlzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:22.875: INFO: stderr: "" Jun 3 22:04:22.875: INFO: stdout: "" Jun 3 22:04:22.875: INFO: update-demo-nautilus-7wlzk is created but not running Jun 3 22:04:27.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:28.064: INFO: stderr: "" Jun 3 22:04:28.064: INFO: stdout: "update-demo-nautilus-7wlzk update-demo-nautilus-cgtrr " Jun 3 22:04:28.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-7wlzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:28.254: INFO: stderr: "" Jun 3 22:04:28.254: INFO: stdout: "" Jun 3 22:04:28.254: INFO: update-demo-nautilus-7wlzk is created but not running Jun 3 22:04:33.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:33.442: INFO: stderr: "" Jun 3 22:04:33.442: INFO: stdout: "update-demo-nautilus-7wlzk update-demo-nautilus-cgtrr " Jun 3 22:04:33.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-7wlzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:33.611: INFO: stderr: "" Jun 3 22:04:33.611: INFO: stdout: "true" Jun 3 22:04:33.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-7wlzk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 22:04:33.785: INFO: stderr: "" Jun 3 22:04:33.785: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 22:04:33.785: INFO: validating pod update-demo-nautilus-7wlzk Jun 3 22:04:33.789: INFO: got data: { "image": "nautilus.jpg" } Jun 3 22:04:33.789: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 22:04:33.789: INFO: update-demo-nautilus-7wlzk is verified up and running Jun 3 22:04:33.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:33.956: INFO: stderr: "" Jun 3 22:04:33.956: INFO: stdout: "true" Jun 3 22:04:33.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 22:04:34.117: INFO: stderr: "" Jun 3 22:04:34.117: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 22:04:34.117: INFO: validating pod update-demo-nautilus-cgtrr Jun 3 22:04:34.121: INFO: got data: { "image": "nautilus.jpg" } Jun 3 22:04:34.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 22:04:34.121: INFO: update-demo-nautilus-cgtrr is verified up and running STEP: scaling down the replication controller Jun 3 22:04:34.131: INFO: scanned /root for discovery docs: Jun 3 22:04:34.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jun 3 22:04:34.346: INFO: stderr: "" Jun 3 22:04:34.346: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 22:04:34.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:34.506: INFO: stderr: "" Jun 3 22:04:34.506: INFO: stdout: "update-demo-nautilus-7wlzk update-demo-nautilus-cgtrr " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 3 22:04:39.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:39.679: INFO: stderr: "" Jun 3 22:04:39.679: INFO: stdout: "update-demo-nautilus-7wlzk update-demo-nautilus-cgtrr " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 3 22:04:44.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:44.845: INFO: stderr: "" Jun 3 22:04:44.845: INFO: stdout: "update-demo-nautilus-cgtrr " Jun 3 22:04:44.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:45.024: INFO: stderr: "" Jun 3 22:04:45.024: INFO: stdout: "true" Jun 3 22:04:45.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 22:04:45.191: INFO: stderr: "" Jun 3 22:04:45.191: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 22:04:45.191: INFO: validating pod update-demo-nautilus-cgtrr Jun 3 22:04:45.195: INFO: got data: { "image": "nautilus.jpg" } Jun 3 22:04:45.195: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 3 22:04:45.195: INFO: update-demo-nautilus-cgtrr is verified up and running STEP: scaling up the replication controller Jun 3 22:04:45.205: INFO: scanned /root for discovery docs: Jun 3 22:04:45.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jun 3 22:04:45.423: INFO: stderr: "" Jun 3 22:04:45.423: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 3 22:04:45.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:45.606: INFO: stderr: "" Jun 3 22:04:45.606: INFO: stdout: "update-demo-nautilus-cgtrr update-demo-nautilus-smh99 " Jun 3 22:04:45.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:45.780: INFO: stderr: "" Jun 3 22:04:45.780: INFO: stdout: "true" Jun 3 22:04:45.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 22:04:45.967: INFO: stderr: "" Jun 3 22:04:45.967: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 22:04:45.967: INFO: validating pod update-demo-nautilus-cgtrr Jun 3 22:04:45.971: INFO: got data: { "image": "nautilus.jpg" } Jun 3 22:04:45.972: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 22:04:45.972: INFO: update-demo-nautilus-cgtrr is verified up and running Jun 3 22:04:45.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-smh99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:46.147: INFO: stderr: "" Jun 3 22:04:46.147: INFO: stdout: "" Jun 3 22:04:46.147: INFO: update-demo-nautilus-smh99 is created but not running Jun 3 22:04:51.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 3 22:04:51.319: INFO: stderr: "" Jun 3 22:04:51.319: INFO: stdout: "update-demo-nautilus-cgtrr update-demo-nautilus-smh99 " Jun 3 22:04:51.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:51.479: INFO: stderr: "" Jun 3 22:04:51.479: INFO: stdout: "true" Jun 3 22:04:51.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-cgtrr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 22:04:51.637: INFO: stderr: "" Jun 3 22:04:51.637: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 22:04:51.637: INFO: validating pod update-demo-nautilus-cgtrr Jun 3 22:04:51.641: INFO: got data: { "image": "nautilus.jpg" } Jun 3 22:04:51.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 22:04:51.641: INFO: update-demo-nautilus-cgtrr is verified up and running Jun 3 22:04:51.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-smh99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 3 22:04:51.795: INFO: stderr: "" Jun 3 22:04:51.795: INFO: stdout: "true" Jun 3 22:04:51.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods update-demo-nautilus-smh99 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 3 22:04:51.946: INFO: stderr: "" Jun 3 22:04:51.946: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 3 22:04:51.946: INFO: validating pod update-demo-nautilus-smh99 Jun 3 22:04:51.949: INFO: got data: { "image": "nautilus.jpg" } Jun 3 22:04:51.950: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 3 22:04:51.950: INFO: update-demo-nautilus-smh99 is verified up and running STEP: using delete to clean up resources Jun 3 22:04:51.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 delete --grace-period=0 --force -f -' Jun 3 22:04:52.098: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 3 22:04:52.098: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 3 22:04:52.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get rc,svc -l name=update-demo --no-headers' Jun 3 22:04:52.296: INFO: stderr: "No resources found in kubectl-9175 namespace.\n" Jun 3 22:04:52.296: INFO: stdout: "" Jun 3 22:04:52.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9175 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 3 22:04:52.463: INFO: stderr: "" Jun 3 22:04:52.463: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:52.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9175" for this suite. 
• [SLOW TEST:30.394 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":19,"skipped":374,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:52.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:04:52.538: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48" in namespace "downward-api-5780" to be "Succeeded or Failed" Jun 3 22:04:52.542: INFO: Pod "downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77051ms Jun 3 22:04:54.545: INFO: Pod "downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007190863s Jun 3 22:04:56.549: INFO: Pod "downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010793797s STEP: Saw pod success Jun 3 22:04:56.549: INFO: Pod "downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48" satisfied condition "Succeeded or Failed" Jun 3 22:04:56.551: INFO: Trying to get logs from node node2 pod downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48 container client-container: STEP: delete the pod Jun 3 22:04:56.565: INFO: Waiting for pod downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48 to disappear Jun 3 22:04:56.568: INFO: Pod downwardapi-volume-87f806aa-ec0d-4c82-b8d1-6390628dac48 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:04:56.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5780" for this suite. 
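
The downward API test just above creates a pod whose volume item carries an explicit per-file mode and then checks that mode on the mounted file from inside the container. A sketch of the volume definition behind "should set mode on item file" (the 0400 value here is an illustrative assumption, not necessarily the mode the suite sets):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardAPIVolumeWithMode builds a downward API volume item exposing
// metadata.name with a per-item file mode, which the test then reads back
// from inside the container.
func downwardAPIVolumeWithMode() corev1.Volume {
	mode := int32(0400) // assumed example mode; the suite picks its own value
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
					Mode: &mode, // per-item mode, what the test asserts on the file
				}},
			},
		},
	}
}

func main() {
	v := downwardAPIVolumeWithMode()
	fmt.Println("volume:", v.Name)
}
```
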
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":389,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:56.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-031adb13-b270-44e5-b052-30ffbac3d9c3 in namespace container-probe-1477 Jun 3 22:01:00.946: INFO: Started pod busybox-031adb13-b270-44e5-b052-30ffbac3d9c3 in namespace container-probe-1477 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 22:01:00.949: INFO: Initial restart count of pod busybox-031adb13-b270-44e5-b052-30ffbac3d9c3 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:01.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1477" for this suite. • [SLOW TEST:244.557 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":397,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:01.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-e1acd799-e450-469e-b46a-aafe5de15161 STEP: Creating a pod to test consume secrets Jun 3 22:05:01.522: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138" in namespace "projected-4338" to be "Succeeded or Failed" Jun 3 22:05:01.525: INFO: Pod "pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.178695ms Jun 3 22:05:03.528: INFO: Pod "pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00625248s Jun 3 22:05:05.532: INFO: Pod "pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009692675s STEP: Saw pod success Jun 3 22:05:05.532: INFO: Pod "pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138" satisfied condition "Succeeded or Failed" Jun 3 22:05:05.535: INFO: Trying to get logs from node node2 pod pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138 container projected-secret-volume-test: STEP: delete the pod Jun 3 22:05:05.549: INFO: Waiting for pod pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138 to disappear Jun 3 22:05:05.551: INFO: Pod pod-projected-secrets-3830e672-900b-4925-beef-ebc8bbde2138 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:05.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4338" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":404,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:51.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:08.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8292" for this suite. • [SLOW TEST:17.069 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":22,"skipped":519,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:08.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 3 22:05:08.273: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6154 980c744d-cbad-46a3-85f9-a4b14ccbf1bb 47693 0 2022-06-03 22:05:08 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-06-03 22:05:08 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zt4cf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zt4cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:05:08.277: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:10.282: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:12.284: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jun 3 22:05:12.284: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6154 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:05:12.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Jun 3 22:05:12.377: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6154 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 3 22:05:12.377: INFO: >>> kubeConfig: /root/.kube/config Jun 3 22:05:12.475: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:12.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6154" for this suite. 
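The dnsPolicy and dnsConfig fields buried in the pod dump above are the whole point of this spec. Extracted into a standalone manifest, with the nameserver, search path, image, and args copied from the dump:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo          # illustrative name
spec:
  dnsPolicy: "None"              # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
EOF
# /etc/resolv.conf inside the pod should now list only 1.1.1.1 and the
# custom search path; this is what the agnhost dns-server-list and
# dns-suffix probes above verify.
kubectl exec dns-config-demo -- cat /etc/resolv.conf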
• ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":23,"skipped":535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:56.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:12.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8435" for this suite. • [SLOW TEST:16.108 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":21,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":196,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:13.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5478 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Jun 3 22:02:13.281: INFO: Found 0 stateful pods, waiting for 3 Jun 3 22:02:23.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:02:23.287: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:02:23.287: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 3 22:02:33.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:02:33.287: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:02:33.287: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 3 22:02:33.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5478 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 22:02:33.862: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 22:02:33.862: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 22:02:33.862: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jun 3 22:02:43.898: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 3 22:02:53.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5478 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 22:02:54.187: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 22:02:54.188: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 22:02:54.188: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 22:03:04.207: INFO: Waiting for StatefulSet statefulset-5478/ss2 to complete update Jun 3 22:03:04.207: INFO: Waiting 
for Pod statefulset-5478/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:03:04.207: INFO: Waiting for Pod statefulset-5478/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:03:04.207: INFO: Waiting for Pod statefulset-5478/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:03:14.214: INFO: Waiting for StatefulSet statefulset-5478/ss2 to complete update Jun 3 22:03:14.214: INFO: Waiting for Pod statefulset-5478/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:03:14.214: INFO: Waiting for Pod statefulset-5478/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:03:24.214: INFO: Waiting for StatefulSet statefulset-5478/ss2 to complete update Jun 3 22:03:24.214: INFO: Waiting for Pod statefulset-5478/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 3 22:03:34.214: INFO: Waiting for StatefulSet statefulset-5478/ss2 to complete update STEP: Rolling back to a previous revision Jun 3 22:03:44.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5478 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 3 22:03:44.729: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 3 22:03:44.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 3 22:03:44.729: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 3 22:03:54.756: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 3 22:04:04.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5478 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 3 22:04:05.031: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 3 22:04:05.031: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 3 22:04:05.031: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 3 22:04:35.052: INFO: Waiting for StatefulSet statefulset-5478/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 3 22:04:45.061: INFO: Deleting all statefulset in ns statefulset-5478 Jun 3 22:04:45.063: INFO: Scaling statefulset ss2 to 0 Jun 3 22:05:15.078: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:05:15.080: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:15.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5478" for this suite. 
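The same image update and rollback can be driven by hand. A sketch: the images are the ones this spec uses, but the container name webserver is an assumption, since the log does not show the StatefulSet template:

kubectl set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl rollout status statefulset/ss2
# Roll back to the previous revision; the revisions kubectl reports
# correspond to the controller-revision hashes (ss2-5bbbc9fc94,
# ss2-677d6db895) the suite waits on above.
kubectl rollout undo statefulset/ss2
kubectl rollout history statefulset/ss2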
• [SLOW TEST:181.854 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":17,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:02:52.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-95927642-3465-4fab-8334-64742df860e3 in namespace container-probe-2120 Jun 3 22:02:58.124: INFO: Started pod liveness-95927642-3465-4fab-8334-64742df860e3 in namespace container-probe-2120 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 22:02:58.127: INFO: Initial restart count of pod liveness-95927642-3465-4fab-8334-64742df860e3 is 0 Jun 3 22:03:14.161: INFO: Restart count of pod container-probe-2120/liveness-95927642-3465-4fab-8334-64742df860e3 is now 1 (16.034111572s elapsed) Jun 3 22:03:36.201: INFO: Restart count of pod container-probe-2120/liveness-95927642-3465-4fab-8334-64742df860e3 is now 2 (38.074100472s elapsed) Jun 3 22:03:56.235: INFO: Restart count of pod container-probe-2120/liveness-95927642-3465-4fab-8334-64742df860e3 is now 3 (58.108582992s elapsed) Jun 3 22:04:16.273: INFO: Restart count of pod container-probe-2120/liveness-95927642-3465-4fab-8334-64742df860e3 is now 4 (1m18.146650129s elapsed) Jun 3 22:05:16.400: INFO: Restart count of pod container-probe-2120/liveness-95927642-3465-4fab-8334-64742df860e3 is now 5 (2m18.273882389s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:16.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2120" for this suite. 
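A restart count that climbs the way this one does (1, 2, 3, ... with growing gaps) is the signature of a liveness probe that keeps failing: the kubelet restarts the container, and crash-loop back-off is why the intervals in the log stretch from ~16s to over a minute. A minimal sketch with assumed image and timings: healthy for 30 seconds, then the probed file disappears and every probe fails.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the health file, remove it after 30s, then idle.
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# Watch the counter increase monotonically, as the spec asserts.
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'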
• [SLOW TEST:144.336 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:48.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Jun 3 22:05:08.683: INFO: EndpointSlice for Service endpointslice-2290/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:18.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2290" for this suite. 
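The objects this spec asserts on can be listed directly; EndpointSlices carry a kubernetes.io/service-name label tying them back to their Service. With the service name and namespace from the run (the namespace is destroyed at teardown, so this only works mid-test):

kubectl get endpointslices -n endpointslice-2290 -l kubernetes.io/service-name=example-named-port
kubectl get endpoints -n endpointslice-2290 example-named-port -o yaml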
• [SLOW TEST:30.126 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":53,"skipped":865,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:16.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-5852cd20-a8d9-44d7-a625-0b3de005f8ab STEP: Creating a pod to test consume secrets Jun 3 22:05:16.524: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c" in namespace "projected-4934" to be "Succeeded or Failed" Jun 3 22:05:16.528: INFO: Pod "pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.232229ms Jun 3 22:05:18.533: INFO: Pod "pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008736677s Jun 3 22:05:20.537: INFO: Pod "pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012431775s STEP: Saw pod success Jun 3 22:05:20.537: INFO: Pod "pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c" satisfied condition "Succeeded or Failed" Jun 3 22:05:20.540: INFO: Trying to get logs from node node1 pod pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c container projected-secret-volume-test: STEP: delete the pod Jun 3 22:05:20.566: INFO: Waiting for pod pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c to disappear Jun 3 22:05:20.568: INFO: Pod pod-projected-secrets-4a106e2f-f6a6-4fed-9baa-5fac64a18c4c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:20.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4934" for this suite. 
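The "mappings and Item Mode" wording maps onto the items/path/mode fields of a projected secret source: the key is remapped to a new file name and given an explicit mode. A sketch with assumed names and payload:

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-demo-secret
          items:
          - key: data-1
            path: new-path-data-1  # the mapping
            mode: 0400             # the item mode
EOF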
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:18.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Jun 3 22:05:18.762: INFO: Waiting up to 5m0s for pod "test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad" in namespace "svcaccounts-3297" to be "Succeeded or Failed" Jun 3 22:05:18.764: INFO: Pod "test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773668ms Jun 3 22:05:20.768: INFO: Pod "test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006147123s Jun 3 22:05:22.771: INFO: Pod "test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009827016s STEP: Saw pod success Jun 3 22:05:22.772: INFO: Pod "test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad" satisfied condition "Succeeded or Failed" Jun 3 22:05:22.774: INFO: Trying to get logs from node node2 pod test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad container agnhost-container: STEP: delete the pod Jun 3 22:05:22.787: INFO: Waiting for pod test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad to disappear Jun 3 22:05:22.789: INFO: Pod test-pod-3498933b-f9a3-47ef-94fa-7cc95fb164ad no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:22.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3297" for this suite. 
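Concretely, "mount projected service account token" means a projected volume with a serviceAccountToken source, which gives the pod a short-lived, rotated token instead of the legacy secret-based one. A sketch; the 3607-second expiry mirrors the projected-volume dump earlier in this log, while the image and command are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-demo            # illustrative name
spec:
  serviceAccountName: default
  containers:
  - name: agnhost-container
    image: busybox
    command: ["sh", "-c", "cat /var/run/secrets/tokens/sa-token; sleep 3600"]
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: sa-token
          expirationSeconds: 3607
EOF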
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":54,"skipped":876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:22.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 3 22:05:22.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b" in namespace "downward-api-9783" to be "Succeeded or Failed" Jun 3 22:05:22.898: INFO: Pod "downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.84572ms Jun 3 22:05:24.901: INFO: Pod "downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006634724s Jun 3 22:05:26.904: INFO: Pod "downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00984143s STEP: Saw pod success Jun 3 22:05:26.904: INFO: Pod "downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b" satisfied condition "Succeeded or Failed" Jun 3 22:05:26.906: INFO: Trying to get logs from node node2 pod downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b container client-container: STEP: delete the pod Jun 3 22:05:26.919: INFO: Waiting for pod downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b to disappear Jun 3 22:05:26.921: INFO: Pod downwardapi-volume-d15c18ff-4ac1-43ab-bd36-437a33e8bf0b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:26.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9783" for this suite. 
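Unlike the label-based downwardAPI sketch earlier, "container's cpu request" goes through resourceFieldRef, which needs a containerName and a divisor. A sketch with assumed names and request; divisor 1m reports the request in millicores, so the file below would read 250:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF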
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:12.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 3 22:05:13.188: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 3 22:05:15.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:05:17.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890713, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 3 22:05:20.211: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:05:20.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the 
mutating webhook for custom resource e2e-test-webhook-2873-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:28.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1331" for this suite. STEP: Destroying namespace "webhook-1331-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.526 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":22,"skipped":431,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:28.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-ed1e0a39-41db-4fc8-b914-e4b939ad3b0d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:34.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9456" for this suite. 
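The "binary data" case exercises the ConfigMap binaryData field, which holds base64-encoded bytes alongside plain-text data keys. A minimal sketch with assumed names and payload (aGVsbG8= is "hello"):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo              # illustrative name
data:
  text: plain text value
binaryData:
  blob: aGVsbG8=
EOF
# Mounted as a volume, each key becomes a file; the blob file carries the
# decoded bytes, which is what this spec waits for.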
• [SLOW TEST:6.072 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":438,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:27.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jun 3 22:05:27.047: INFO: namespace kubectl-3223 Jun 3 22:05:27.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3223 create -f -' Jun 3 22:05:27.463: INFO: stderr: "" Jun 3 22:05:27.463: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 3 22:05:28.466: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:28.466: INFO: Found 0 / 1 Jun 3 22:05:29.466: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:29.466: INFO: Found 0 / 1 Jun 3 22:05:30.467: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:30.467: INFO: Found 1 / 1 Jun 3 22:05:30.468: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 3 22:05:30.470: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:30.470: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 3 22:05:30.470: INFO: wait on agnhost-primary startup in kubectl-3223 Jun 3 22:05:30.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3223 logs agnhost-primary-69mpp agnhost-primary' Jun 3 22:05:30.628: INFO: stderr: "" Jun 3 22:05:30.628: INFO: stdout: "Paused\n" STEP: exposing RC Jun 3 22:05:30.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3223 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jun 3 22:05:30.842: INFO: stderr: "" Jun 3 22:05:30.842: INFO: stdout: "service/rm2 exposed\n" Jun 3 22:05:30.844: INFO: Service rm2 in namespace kubectl-3223 found. STEP: exposing service Jun 3 22:05:32.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3223 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jun 3 22:05:33.056: INFO: stderr: "" Jun 3 22:05:33.056: INFO: stdout: "service/rm3 exposed\n" Jun 3 22:05:33.059: INFO: Service rm3 in namespace kubectl-3223 found. 
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:35.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3223" for this suite. • [SLOW TEST:8.055 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":56,"skipped":949,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:05.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-9348 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-9348 Jun 3 22:05:05.612: INFO: Found 0 stateful pods, waiting for 1 Jun 3 22:05:15.617: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 3 22:05:15.637: INFO: Deleting all statefulset in ns statefulset-9348 Jun 3 22:05:15.640: INFO: Scaling statefulset ss to 0 Jun 3 22:05:35.653: INFO: Waiting for statefulset status.replicas updated to 0 Jun 3 22:05:35.656: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:35.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9348" for this suite. 
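kubectl scale always writes through the scale subresource, so the update half of this spec can be reproduced by hand (statefulset name from the run):

kubectl scale statefulset ss --replicas=2
# spec.replicas changes immediately; status.replicas converges afterwards.
kubectl get statefulset ss -o jsonpath='{.spec.replicas} {.status.replicas}{"\n"}'

The patch half of the spec talks to the /scale subresource through the API directly; kubectl's own --subresource flag only arrived in client releases newer than the v1.21 kubectl used in this run.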
• [SLOW TEST:30.092 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":18,"skipped":410,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:15.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-b73e7bd3-cd13-4bb5-8b27-88a252743314 in namespace container-probe-2379 Jun 3 22:05:21.190: INFO: Started pod liveness-b73e7bd3-cd13-4bb5-8b27-88a252743314 in namespace container-probe-2379 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 22:05:21.192: INFO: Initial restart count of pod liveness-b73e7bd3-cd13-4bb5-8b27-88a252743314 is 0 Jun 3 22:05:37.225: INFO: Restart count of pod container-probe-2379/liveness-b73e7bd3-cd13-4bb5-8b27-88a252743314 is now 1 (16.03322815s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:37.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2379" for this suite. 
• [SLOW TEST:22.091 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:37.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 3 22:05:37.339: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4911 30ce4305-d6a5-44d3-b209-e857e1a41fe4 48446 0 2022-06-03 22:05:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-03 22:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:05:37.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4911 30ce4305-d6a5-44d3-b209-e857e1a41fe4 48447 0 2022-06-03 22:05:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-03 22:05:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 3 22:05:37.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4911 30ce4305-d6a5-44d3-b209-e857e1a41fe4 48448 0 2022-06-03 22:05:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-03 22:05:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 3 22:05:37.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4911 30ce4305-d6a5-44d3-b209-e857e1a41fe4 48449 0 2022-06-03 22:05:37 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-03 22:05:37 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:37.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4911" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":19,"skipped":241,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:00:37.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0603 22:00:37.897541 33 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:37.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-385" for this suite. • [SLOW TEST:300.053 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":22,"skipped":286,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:34.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 3 22:05:34.453: INFO: Waiting up to 5m0s for pod "downward-api-17b43675-6f91-4b59-974b-6ed96298900a" in namespace "downward-api-3781" to be "Succeeded or Failed" Jun 3 22:05:34.457: INFO: Pod "downward-api-17b43675-6f91-4b59-974b-6ed96298900a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.844466ms Jun 3 22:05:36.461: INFO: Pod "downward-api-17b43675-6f91-4b59-974b-6ed96298900a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007789987s Jun 3 22:05:38.466: INFO: Pod "downward-api-17b43675-6f91-4b59-974b-6ed96298900a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012354596s STEP: Saw pod success Jun 3 22:05:38.466: INFO: Pod "downward-api-17b43675-6f91-4b59-974b-6ed96298900a" satisfied condition "Succeeded or Failed" Jun 3 22:05:38.468: INFO: Trying to get logs from node node1 pod downward-api-17b43675-6f91-4b59-974b-6ed96298900a container dapi-container: STEP: delete the pod Jun 3 22:05:38.506: INFO: Waiting for pod downward-api-17b43675-6f91-4b59-974b-6ed96298900a to disappear Jun 3 22:05:38.508: INFO: Pod downward-api-17b43675-6f91-4b59-974b-6ed96298900a no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:38.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3781" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:35.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Jun 3 22:05:35.155: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:37.159: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:39.160: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:40.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6844" for this suite. 
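The adoption spec above reduces to this: a bare pod whose labels match a ReplicationController's selector acquires an ownerReference to that controller as soon as the controller is created. A minimal kubectl sketch of the same flow (names and image are illustrative; the suite drives these steps through the Go client rather than kubectl):

kubectl run pod-adoption --labels=name=pod-adoption \
    --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
# The formerly orphaned pod should now carry an ownerReference to the RC:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'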
• [SLOW TEST:5.067 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":57,"skipped":963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:12.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9691 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9691 STEP: creating replication controller externalsvc in namespace services-9691 I0603 22:05:12.619472 27 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9691, replica count: 2 I0603 22:05:15.670731 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:05:18.673151 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 3 22:05:18.689: INFO: Creating new exec pod Jun 3 22:05:22.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9691 exec execpodsglq7 -- /bin/sh -x -c nslookup nodeport-service.services-9691.svc.cluster.local' Jun 3 22:05:22.973: INFO: stderr: "+ nslookup nodeport-service.services-9691.svc.cluster.local\n" Jun 3 22:05:22.973: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-9691.svc.cluster.local\tcanonical name = externalsvc.services-9691.svc.cluster.local.\nName:\texternalsvc.services-9691.svc.cluster.local\nAddress: 10.233.13.136\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9691, will wait for the garbage collector to delete the pods Jun 3 22:05:23.032: INFO: Deleting ReplicationController externalsvc took: 5.196072ms Jun 3 22:05:23.133: INFO: Terminating ReplicationController externalsvc pods took: 100.89405ms Jun 3 22:05:40.242: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:40.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9691" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:27.679 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":24,"skipped":568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:40.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:40.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5266" for this suite. 
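The Events API spec above exercises the standard verbs plus field-selector filtering. Roughly the same operations from the command line (the selector values and event name are illustrative; the suite filters on source and reportingController through the Go client):

# List everywhere, then narrow with supported event field selectors:
kubectl get events --all-namespaces
kubectl get events -n default --field-selector type=Normal,reason=Created
# Events are ordinary API objects and can be deleted like any other:
kubectl delete event <event-name> -n default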
•S ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":58,"skipped":994,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:40.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:40.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3630" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":59,"skipped":995,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:38.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:05:38.592: INFO: Creating deployment "test-recreate-deployment" Jun 3 22:05:38.596: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 3 22:05:38.601: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 3 22:05:40.606: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 3 22:05:40.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890738, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890738, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890738, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789890738, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 3 22:05:42.612: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 3 22:05:42.619: INFO: Updating deployment test-recreate-deployment Jun 3 22:05:42.619: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 22:05:42.657: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7833 76c3aa4e-118a-43b9-a68c-c5ad6f86d586 48719 2 2022-06-03 22:05:38 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-03 22:05:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-03 22:05:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037a3218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-06-03 22:05:42 +0000 UTC,LastTransitionTime:2022-06-03 22:05:42 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-06-03 22:05:42 +0000 UTC,LastTransitionTime:2022-06-03 22:05:38 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 3 22:05:42.661: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-7833 23de6597-fc08-4c1c-8a7a-30b5a2f41a32 48717 1 2022-06-03 22:05:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 76c3aa4e-118a-43b9-a68c-c5ad6f86d586 0xc0037a3f10 0xc0037a3f11}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:05:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"76c3aa4e-118a-43b9-a68c-c5ad6f86d586\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037d4028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:05:42.661: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 3 22:05:42.661: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-7833 8c7d56a9-dd8a-4393-86dc-5dd7610b3139 48707 2 2022-06-03 22:05:38 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-recreate-deployment 76c3aa4e-118a-43b9-a68c-c5ad6f86d586 0xc0037a3c37 0xc0037a3c38}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:05:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"76c3aa4e-118a-43b9-a68c-c5ad6f86d586\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037a3dc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:05:42.667: INFO: Pod "test-recreate-deployment-85d47dcb4-7vgms" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-7vgms test-recreate-deployment-85d47dcb4- deployment-7833 e84b6be3-22c6-4c59-aa64-dfad0863200f 48720 0 2022-06-03 22:05:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 23de6597-fc08-4c1c-8a7a-30b5a2f41a32 0xc0037d471f 0xc0037d4740}] [] [{kube-controller-manager Update v1 2022-06-03 22:05:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23de6597-fc08-4c1c-8a7a-30b5a2f41a32\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-03 22:05:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nk5d7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nk5d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:05:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:05:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:05:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:05:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-03 22:05:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7833" for this suite. 
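The detail that matters in the RecreateDeployment spec is strategy.type: Recreate, and the dump above shows it in action: the old ReplicaSet is already scaled to Replicas:*0 while the new pod is still ContainerCreating, i.e. old pods are torn down before any new pod starts. A manifest with the same shape, mirroring the names and images from this run (an approximation of what the test does via the API):

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # delete all old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
EOF
# Trigger a second rollout by changing the pod template, then follow it:
kubectl set image deployment/test-recreate-deployment \
    agnhost=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl rollout status deployment/test-recreate-deployment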
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":25,"skipped":468,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:37.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 3 22:05:37.394: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 3 22:05:42.397: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:43.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4693" for this suite. • [SLOW TEST:6.050 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":20,"skipped":244,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:40.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 3 22:05:40.366: INFO: Waiting up to 5m0s for pod "pod-ce203671-3b13-4690-811f-e4dcdeadf17c" in namespace "emptydir-6275" to be "Succeeded or Failed" Jun 3 22:05:40.369: INFO: Pod "pod-ce203671-3b13-4690-811f-e4dcdeadf17c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773962ms Jun 3 22:05:42.396: INFO: Pod "pod-ce203671-3b13-4690-811f-e4dcdeadf17c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029964306s Jun 3 22:05:44.399: INFO: Pod "pod-ce203671-3b13-4690-811f-e4dcdeadf17c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033550942s STEP: Saw pod success Jun 3 22:05:44.399: INFO: Pod "pod-ce203671-3b13-4690-811f-e4dcdeadf17c" satisfied condition "Succeeded or Failed" Jun 3 22:05:44.401: INFO: Trying to get logs from node node1 pod pod-ce203671-3b13-4690-811f-e4dcdeadf17c container test-container: STEP: delete the pod Jun 3 22:05:44.415: INFO: Waiting for pod pod-ce203671-3b13-4690-811f-e4dcdeadf17c to disappear Jun 3 22:05:44.417: INFO: Pod pod-ce203671-3b13-4690-811f-e4dcdeadf17c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:44.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6275" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:07.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-249bd50e-2c7f-4e50-9faf-e8fc73f5f062 STEP: Creating the pod Jun 3 22:04:07.056: INFO: The status of Pod pod-configmaps-d2de6236-c0b5-4537-90cb-d226b373dc1a is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:09.059: INFO: The status of Pod pod-configmaps-d2de6236-c0b5-4537-90cb-d226b373dc1a is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:11.059: INFO: The status of Pod pod-configmaps-d2de6236-c0b5-4537-90cb-d226b373dc1a is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:13.061: INFO: The status of Pod pod-configmaps-d2de6236-c0b5-4537-90cb-d226b373dc1a is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:04:15.059: INFO: The status of Pod pod-configmaps-d2de6236-c0b5-4537-90cb-d226b373dc1a is Running (Ready = true) STEP: Updating configmap configmap-test-upd-249bd50e-2c7f-4e50-9faf-e8fc73f5f062 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:45.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3179" for this suite. 
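Most of this spec's 98 seconds sits in the final "waiting to observe update in volume" step: ConfigMap volumes are refreshed by the kubelet on its periodic sync, so an update reaches the mounted file only after a delay. A hand-run sketch (names and image are illustrative):

kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd
EOF
# Change the ConfigMap, then re-read the mounted file until the kubelet syncs:
kubectl create configmap configmap-test-upd --from-literal=data-1=value-2 \
    --dry-run=client -o yaml | kubectl apply -f -
kubectl exec pod-configmaps -- cat /etc/cm/data-1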
• [SLOW TEST:98.848 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":302,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:40.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Jun 3 22:05:40.402: INFO: Waiting up to 5m0s for pod "client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a" in namespace "containers-164" to be "Succeeded or Failed" Jun 3 22:05:40.404: INFO: Pod "client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180686ms Jun 3 22:05:42.407: INFO: Pod "client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005161271s Jun 3 22:05:44.411: INFO: Pod "client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008877927s Jun 3 22:05:46.415: INFO: Pod "client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012770202s STEP: Saw pod success Jun 3 22:05:46.415: INFO: Pod "client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a" satisfied condition "Succeeded or Failed" Jun 3 22:05:46.418: INFO: Trying to get logs from node node2 pod client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a container agnhost-container: STEP: delete the pod Jun 3 22:05:46.430: INFO: Waiting for pod client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a to disappear Jun 3 22:05:46.432: INFO: Pod client-containers-0bb794ba-35c3-4599-bf79-df6afb30cc3a no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:46.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-164" for this suite. 
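In pod-spec terms, overriding "the image's default command (docker entrypoint)" means setting command, which replaces the image's ENTRYPOINT (args would replace its CMD). A sketch with the same agnhost image the suite uses; its entrypoint-tester subcommand just prints the argv it was started with:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    # 'command' overrides ENTRYPOINT; without it the image's default would run.
    command: ["/agnhost", "entrypoint-tester", "override", "arguments"]
EOF
kubectl logs client-containers   # shows the arguments the container actually ran with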
• [SLOW TEST:6.068 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:42.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:05:42.749: INFO: The status of Pod busybox-readonly-fs087850c3-1b94-4173-bd45-3e0018bc5030 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:44.752: INFO: The status of Pod busybox-readonly-fs087850c3-1b94-4173-bd45-3e0018bc5030 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:46.752: INFO: The status of Pod busybox-readonly-fs087850c3-1b94-4173-bd45-3e0018bc5030 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:48.754: INFO: The status of Pod busybox-readonly-fs087850c3-1b94-4173-bd45-3e0018bc5030 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6954" for this suite. 
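"Should not write to root filesystem" maps to securityContext.readOnlyRootFilesystem: true on the container: with it set, writes anywhere outside an attached volume fail. A minimal reproduction (name and image illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      readOnlyRootFilesystem: true   # mounts the container's root fs read-only
EOF
# Expect "Read-only file system":
kubectl exec busybox-readonly-fs -- sh -c 'echo hi > /tmp/file'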
• [SLOW TEST:6.056 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":484,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:44.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Jun 3 22:05:44.583: INFO: The status of Pod pod-update-4a2bd5b8-bd8e-4de6-a0e9-a04050549e04 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:46.586: INFO: The status of Pod pod-update-4a2bd5b8-bd8e-4de6-a0e9-a04050549e04 is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:48.589: INFO: The status of Pod pod-update-4a2bd5b8-bd8e-4de6-a0e9-a04050549e04 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 3 22:05:49.103: INFO: Successfully updated pod "pod-update-4a2bd5b8-bd8e-4de6-a0e9-a04050549e04" STEP: verifying the updated pod is in kubernetes Jun 3 22:05:49.107: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:49.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2128" for this suite. 
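The pod-update spec relies on the short list of fields that may be changed on a live pod: metadata such as labels and annotations, the container image fields, spec.activeDeadlineSeconds, and additions to spec.tolerations; everything else is rejected by validation. The suite updates the pod's metadata, which with kubectl amounts to a label change (the pod name below is the generated name from this run):

kubectl label pod pod-update-4a2bd5b8-bd8e-4de6-a0e9-a04050549e04 time=morning --overwrite
kubectl get pod pod-update-4a2bd5b8-bd8e-4de6-a0e9-a04050549e04 --show-labels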
• ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":649,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":997,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:46.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jun 3 22:05:46.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5041 create -f -' Jun 3 22:05:46.825: INFO: stderr: "" Jun 3 22:05:46.825: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 3 22:05:47.829: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:47.829: INFO: Found 0 / 1 Jun 3 22:05:48.829: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:48.829: INFO: Found 0 / 1 Jun 3 22:05:49.830: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:49.830: INFO: Found 0 / 1 Jun 3 22:05:50.828: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:50.828: INFO: Found 0 / 1 Jun 3 22:05:51.829: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:51.829: INFO: Found 0 / 1 Jun 3 22:05:52.831: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:52.831: INFO: Found 1 / 1 Jun 3 22:05:52.831: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 3 22:05:52.834: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:52.834: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 3 22:05:52.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5041 patch pod agnhost-primary-kz5b2 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 3 22:05:53.003: INFO: stderr: "" Jun 3 22:05:53.003: INFO: stdout: "pod/agnhost-primary-kz5b2 patched\n" STEP: checking annotations Jun 3 22:05:53.006: INFO: Selector matched 1 pods for map[app:agnhost] Jun 3 22:05:53.006: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:53.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5041" for this suite. 
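The patch command the spec runs is quoted verbatim above; the follow-up "checking annotations" step amounts to reading the annotation back, e.g. with a jsonpath query (the pod name is the generated name from this run):

kubectl patch pod agnhost-primary-kz5b2 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod agnhost-primary-kz5b2 -o jsonpath='{.metadata.annotations.x}'   # prints: y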
• [SLOW TEST:6.575 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":61,"skipped":997,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:45.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Jun 3 22:05:47.976: INFO: running pods: 0 < 3 Jun 3 22:05:49.980: INFO: running pods: 0 < 3 Jun 3 22:05:51.980: INFO: running pods: 1 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:53.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6306" for this suite. 
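"Observe PodDisruptionBudget status updated" is about the disruption controller filling in the PDB's status (currentHealthy, desiredHealthy, disruptionsAllowed, expectedPods) once it has seen the pods matched by the selector, which is why the spec first waits for its 3 pods to run. A minimal PDB to watch the same thing (names illustrative; policy/v1 is available on this v1.21 cluster):

kubectl create -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: foo
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: foo
EOF
# Sparse at first; populated once the controller observes matching pods:
kubectl get pdb foo -o jsonpath='{.status}'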
• [SLOW TEST:8.087 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":22,"skipped":320,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:53.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 22:05:58.098: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:58.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2841" for this suite. 
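The "Expected: &{DONE} to match" line above is the termination message being read back from the container's log: with terminationMessagePolicy: FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log as its message. Reproduced by hand (name and image illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]
    # On failure, fall back to the log tail for the termination message:
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-message-test \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'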
• [SLOW TEST:5.082 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1004,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:48.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 3 22:05:48.805: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:58.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-445" for this suite. 
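"Submitted and removed" sets up a watch on pods, deletes the pod gracefully, and requires that the DELETED notification arrive only after the grace period, which is where most of the 9.6 seconds goes. The same flow by hand (names illustrative):

kubectl get pods -w &                          # background watch prints ADDED/MODIFIED/DELETED
kubectl run pod-submit-remove --restart=Never --image=busybox -- sleep 3600
kubectl delete pod pod-submit-remove --grace-period=30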
• [SLOW TEST:9.596 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:58.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 3 22:05:58.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5341 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Jun 3 22:05:58.608: INFO: stderr: "" Jun 3 22:05:58.608: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Jun 3 22:05:58.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5341 delete pods e2e-test-httpd-pod' Jun 3 22:05:59.317: INFO: stderr: "" Jun 3 22:05:59.317: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:59.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5341" for this suite. 
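As the quoted command shows, kubectl run --restart=Never creates a bare pod (no controller) whose spec.restartPolicy is Never, which can be confirmed directly:

kubectl run e2e-test-httpd-pod --restart=Never \
    --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # Never
kubectl delete pod e2e-test-httpd-pod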
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":28,"skipped":509,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:59.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-9d296950-4638-4949-9818-5b05cb3b844a [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:05:59.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1509" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:49.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:00.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3908" for this suite. • [SLOW TEST:11.064 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":27,"skipped":671,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:35.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Jun 3 22:05:35.702: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:01.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2129" for this suite. • [SLOW TEST:25.583 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":19,"skipped":411,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:00.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:06:00.311: INFO: Got root ca configmap in namespace "svcaccounts-3095" Jun 3 22:06:00.314: INFO: Deleted root ca configmap in namespace "svcaccounts-3095" STEP: waiting for a new root ca configmap created Jun 3 22:06:00.818: INFO: Recreated root ca configmap in namespace "svcaccounts-3095" Jun 3 22:06:00.821: INFO: Updated root ca configmap in namespace "svcaccounts-3095" STEP: waiting for the root ca configmap reconciled Jun 3 22:06:01.326: INFO: Reconciled root ca configmap in namespace "svcaccounts-3095" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:01.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3095" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":28,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:01.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Jun 3 22:06:01.466: INFO: created test-podtemplate-1 Jun 3 22:06:01.469: INFO: created test-podtemplate-2 Jun 3 22:06:01.472: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jun 3 22:06:01.475: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jun 3 22:06:01.484: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:01.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3095" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":29,"skipped":743,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:01.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Jun 3 22:06:01.524: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3234 proxy --unix-socket=/tmp/kubectl-proxy-unix661860338/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:01.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3234" for this suite. 
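The proxy test above serves the API over a unix domain socket instead of TCP, so retrieving `/api/` needs an HTTP client with a custom dialer. A stdlib-only sketch (the socket path and URL host are placeholders; the path must match whatever was passed to `kubectl proxy --unix-socket`):

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	socketPath := "/tmp/kubectl-proxy.sock" // illustrative path

	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host:port in the URL and always dial the unix socket.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", socketPath)
			},
		},
	}

	// The host part is arbitrary; routing happens over the socket.
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // expect the APIVersions document from the apiserver
}
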
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":30,"skipped":745,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:37.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-fvv7 STEP: Creating a pod to test atomic-volume-subpath Jun 3 22:05:37.981: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fvv7" in namespace "subpath-101" to be "Succeeded or Failed" Jun 3 22:05:37.984: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596773ms Jun 3 22:05:39.988: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006703566s Jun 3 22:05:41.995: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 4.014135319s Jun 3 22:05:44.000: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 6.018810264s Jun 3 22:05:46.006: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 8.02500572s Jun 3 22:05:48.010: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 10.028802176s Jun 3 22:05:50.013: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 12.031917695s Jun 3 22:05:52.016: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 14.035226617s Jun 3 22:05:54.020: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 16.038890996s Jun 3 22:05:56.024: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 18.043391224s Jun 3 22:05:58.030: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 20.048653726s Jun 3 22:06:00.034: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 22.053007811s Jun 3 22:06:02.038: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Running", Reason="", readiness=true. Elapsed: 24.056609616s Jun 3 22:06:04.043: INFO: Pod "pod-subpath-test-projected-fvv7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.061696884s STEP: Saw pod success Jun 3 22:06:04.043: INFO: Pod "pod-subpath-test-projected-fvv7" satisfied condition "Succeeded or Failed" Jun 3 22:06:04.046: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-fvv7 container test-container-subpath-projected-fvv7: STEP: delete the pod Jun 3 22:06:04.061: INFO: Waiting for pod pod-subpath-test-projected-fvv7 to disappear Jun 3 22:06:04.063: INFO: Pod pod-subpath-test-projected-fvv7 no longer exists STEP: Deleting pod pod-subpath-test-projected-fvv7 Jun 3 22:06:04.063: INFO: Deleting pod "pod-subpath-test-projected-fvv7" in namespace "subpath-101" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:04.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-101" for this suite. • [SLOW TEST:26.134 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":289,"failed":0} SSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:01.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:06:01.637: INFO: Creating pod... Jun 3 22:06:01.652: INFO: Pod Quantity: 1 Status: Pending Jun 3 22:06:02.655: INFO: Pod Quantity: 1 Status: Pending Jun 3 22:06:03.655: INFO: Pod Quantity: 1 Status: Pending Jun 3 22:06:04.655: INFO: Pod Quantity: 1 Status: Pending Jun 3 22:06:05.656: INFO: Pod Quantity: 1 Status: Pending Jun 3 22:06:06.655: INFO: Pod Quantity: 1 Status: Pending Jun 3 22:06:07.656: INFO: Pod Status: Running Jun 3 22:06:07.656: INFO: Creating service... 
Jun 3 22:06:07.661: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/DELETE Jun 3 22:06:07.664: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Jun 3 22:06:07.664: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/GET Jun 3 22:06:07.666: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Jun 3 22:06:07.666: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/HEAD Jun 3 22:06:07.668: INFO: http.Client request:HEAD | StatusCode:200 Jun 3 22:06:07.668: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/OPTIONS Jun 3 22:06:07.670: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Jun 3 22:06:07.670: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/PATCH Jun 3 22:06:07.672: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Jun 3 22:06:07.672: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/POST Jun 3 22:06:07.674: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Jun 3 22:06:07.674: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/pods/agnhost/proxy/some/path/with/PUT Jun 3 22:06:07.676: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Jun 3 22:06:07.676: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/DELETE Jun 3 22:06:07.679: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Jun 3 22:06:07.679: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/GET Jun 3 22:06:07.682: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Jun 3 22:06:07.682: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/HEAD Jun 3 22:06:07.684: INFO: http.Client request:HEAD | StatusCode:200 Jun 3 22:06:07.684: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/OPTIONS Jun 3 22:06:07.687: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Jun 3 22:06:07.687: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/PATCH Jun 3 22:06:07.690: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Jun 3 22:06:07.690: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/POST Jun 3 22:06:07.692: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Jun 3 22:06:07.692: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4613/services/test-service/proxy/some/path/with/PUT Jun 3 22:06:07.695: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:07.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4613" for this suite. • [SLOW TEST:6.084 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":31,"skipped":746,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:04.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 3 22:06:04.122: INFO: Waiting up to 5m0s for pod "pod-f59c30ec-b3a9-4793-8aac-5879207bdabf" in namespace "emptydir-8525" to be "Succeeded or Failed" Jun 3 22:06:04.126: INFO: Pod "pod-f59c30ec-b3a9-4793-8aac-5879207bdabf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170597ms Jun 3 22:06:06.129: INFO: Pod "pod-f59c30ec-b3a9-4793-8aac-5879207bdabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007500432s Jun 3 22:06:08.133: INFO: Pod "pod-f59c30ec-b3a9-4793-8aac-5879207bdabf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010722893s Jun 3 22:06:10.137: INFO: Pod "pod-f59c30ec-b3a9-4793-8aac-5879207bdabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014864212s STEP: Saw pod success Jun 3 22:06:10.137: INFO: Pod "pod-f59c30ec-b3a9-4793-8aac-5879207bdabf" satisfied condition "Succeeded or Failed" Jun 3 22:06:10.140: INFO: Trying to get logs from node node2 pod pod-f59c30ec-b3a9-4793-8aac-5879207bdabf container test-container: STEP: delete the pod Jun 3 22:06:10.156: INFO: Waiting for pod pod-f59c30ec-b3a9-4793-8aac-5879207bdabf to disappear Jun 3 22:06:10.158: INFO: Pod pod-f59c30ec-b3a9-4793-8aac-5879207bdabf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:10.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8525" for this suite. 
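The emptydir (non-root,0644,tmpfs) test above boils down to a pod spec with three knobs: a memory-backed emptyDir, a non-root security context, and a command that exercises the 0644 file mode. A sketch of that shape, assuming client-go v0.21.x (image, UID, and paths are illustrative; the real test uses the suite's agnhost-based mount tester):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func tmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // pod runs to completion, matching "Succeeded or Failed" above
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root, per the test name
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs, not node disk
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "touch /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(tmpfsPod().Name) // create it with CoreV1().Pods(ns).Create as in the earlier sketch
}
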
• [SLOW TEST:6.075 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:07.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 3 22:06:11.806: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:11.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3975" for this suite. 
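The Container Runtime test above ("Expected: &{OK} to match Container's Termination Message: OK") hinges on two container fields: the termination-message path the container writes to, and the FallbackToLogsOnError policy, under which the kubelet only falls back to container logs when the file is empty and the container failed. A sketch of the relevant spec, assuming client-go v0.21.x (image and command are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.34",
				// Write the message to the file and exit 0; since the pod
				// succeeds and the file is non-empty, the file wins over logs.
				Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() {
	fmt.Println(terminationMessagePod().Name)
}
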
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":764,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:11.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:11.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-796" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":33,"skipped":776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:43.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1645 STEP: creating service affinity-clusterip-transition in namespace services-1645 STEP: creating replication controller affinity-clusterip-transition in namespace services-1645 I0603 22:05:43.464994 36 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1645, replica count: 3 I0603 22:05:46.516626 36 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:05:49.517519 36 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:05:52.517998 36 runners.go:190] 
affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 22:05:52.524: INFO: Creating new exec pod Jun 3 22:05:57.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec execpod-affinityshl6t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Jun 3 22:05:57.792: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jun 3 22:05:57.792: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 22:05:57.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec execpod-affinityshl6t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.53.71 80' Jun 3 22:05:58.082: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.53.71 80\nConnection to 10.233.53.71 80 port [tcp/http] succeeded!\n" Jun 3 22:05:58.082: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 22:05:58.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec execpod-affinityshl6t -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.53.71:80/ ; done' Jun 3 22:05:58.412: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n" Jun 3 22:05:58.412: INFO: stdout: "\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-6zm2b\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-6zm2b\naffinity-clusterip-transition-6zm2b\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-6zm2b\naffinity-clusterip-transition-6zm2b\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-cxwxh\naffinity-clusterip-transition-cxwxh" Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response 
from host: affinity-clusterip-transition-6zm2b Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-6zm2b Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-6zm2b Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-6zm2b Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-6zm2b Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.412: INFO: Received response from host: affinity-clusterip-transition-cxwxh Jun 3 22:05:58.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1645 exec execpod-affinityshl6t -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.53.71:80/ ; done' Jun 3 22:05:58.730: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.53.71:80/\n" Jun 3 22:05:58.730: INFO: stdout: "\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9\naffinity-clusterip-transition-pv9q9" Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: 
affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.730: INFO: Received response from host: affinity-clusterip-transition-pv9q9 Jun 3 22:05:58.731: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1645, will wait for the garbage collector to delete the pods Jun 3 22:05:58.796: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.965371ms Jun 3 22:05:58.897: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.188185ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:12.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1645" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.778 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":250,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:12.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-855541e7-b591-4546-b67f-edc2f4d91eb4 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:12.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3048" for this suite. 
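The ConfigMap test above expects apiserver validation to reject an empty data key at create time. A minimal client-go v0.21.x sketch of that negative case (the kubeconfig path and namespace are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key: invalid
	}
	_, err = clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	// Validation rejects the empty key, so err is expected to be non-nil
	// (an Invalid/422 status error from the apiserver).
	fmt.Println("create error:", err)
}
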
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":22,"skipped":253,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:54.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Jun 3 22:05:54.047: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:56.050: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:05:58.051: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 3 22:05:58.066: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:06:00.070: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:06:02.070: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:06:04.070: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Jun 3 22:06:04.076: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 22:06:04.079: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 22:06:06.079: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 22:06:06.082: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 22:06:08.081: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 22:06:08.084: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 22:06:10.079: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 22:06:10.083: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 22:06:12.080: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 22:06:12.083: INFO: Pod pod-with-prestop-exec-hook still exists Jun 3 22:06:14.081: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 3 22:06:14.083: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:14.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3585" for this suite. 
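The lifecycle-hook test above deletes a pod carrying a preStop exec hook and then checks, via the pod-handle-http-request server it created first, that the hook fired before termination. A sketch of the hooked pod, assuming client-go v0.21.x, where the hook type is still named corev1.Handler (later releases rename it LifecycleHandler); the image, command, and callback URL are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod builds a pod whose preStop hook calls back to an HTTP handler
// pod; the kubelet runs the hook before stopping the container on deletion,
// so a received request proves the hook executed.
func preStopPod(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox:1.34",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler is the v0.21 name for this type.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://" + handlerIP + ":8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(preStopPod("10.244.4.1").Name) // handler IP is a placeholder
}
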
• [SLOW TEST:20.086 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":326,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:10.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 3 22:06:10.247: INFO: Waiting up to 5m0s for pod "pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e" in namespace "emptydir-2032" to be "Succeeded or Failed" Jun 3 22:06:10.251: INFO: Pod "pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642695ms Jun 3 22:06:12.255: INFO: Pod "pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007830804s Jun 3 22:06:14.260: INFO: Pod "pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013298295s STEP: Saw pod success Jun 3 22:06:14.260: INFO: Pod "pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e" satisfied condition "Succeeded or Failed" Jun 3 22:06:14.265: INFO: Trying to get logs from node node1 pod pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e container test-container: STEP: delete the pod Jun 3 22:06:14.283: INFO: Waiting for pod pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e to disappear Jun 3 22:06:14.285: INFO: Pod pod-71f3a4cf-a98f-46d6-af9d-660c16fd132e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:14.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2032" for this suite. 
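The emptydir (root,0777,default) variant above differs from the tmpfs case sketched earlier only in its knobs: Medium left unset (so the node's default disk-backed storage), file mode 0777, and no non-root security context. Only the volume definition changes, per this illustrative fragment:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// defaultMediumVolume shows the one structural difference from the tmpfs
// sketch: StorageMediumDefault ("") selects the node's default storage
// rather than memory.
func defaultMediumVolume() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumDefault, // "" — node disk, not tmpfs
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", defaultMediumVolume())
}
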
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":318,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":29,"skipped":510,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:59.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jun 3 22:05:59.390: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.390: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.393: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.393: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.400: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.400: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.415: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:05:59.415: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 3 22:06:05.281: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jun 3 22:06:05.281: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jun 3 22:06:05.285: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Jun 3 22:06:05.292: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in 
namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 0 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.293: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.298: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.298: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.319: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.319: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:05.327: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:05.327: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:05.330: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:05.330: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:09.543: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:09.543: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:09.559: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 STEP: listing Deployments Jun 3 22:06:09.562: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jun 3 22:06:09.574: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Jun 3 22:06:09.580: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:09.580: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:09.586: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:09.593: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:09.603: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:12.715: INFO: observed Deployment 
test-deployment in namespace deployment-9716 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:12.721: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:12.725: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:12.728: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:12.733: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 3 22:06:16.643: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jun 3 22:06:16.666: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:16.666: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:16.666: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:16.666: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 1 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 2 Jun 3 22:06:16.667: INFO: observed Deployment test-deployment in namespace deployment-9716 with ReadyReplicas 3 STEP: deleting the Deployment Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.673: INFO: observed event type MODIFIED Jun 3 22:06:16.674: INFO: observed event type MODIFIED Jun 3 22:06:16.674: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 3 22:06:16.676: INFO: Log out all the ReplicaSets if there is no deployment created Jun 3 22:06:16.684: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-9716 bdb137af-d9aa-4809-a50d-adea7d608b13 49861 4 2022-06-03 22:06:05 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] 
map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment a4ae8d85-d67c-4dfb-99f9-eb4f82ab68ea 0xc0062c0d67 0xc0062c0d68}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:06:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4ae8d85-d67c-4dfb-99f9-eb4f82ab68ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0062c0dd0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:06:16.687: INFO: ReplicaSet "test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-9716 7c22ca45-babb-451a-9edc-ec039e05cb8e 49589 3 2022-06-03 22:05:59 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment a4ae8d85-d67c-4dfb-99f9-eb4f82ab68ea 0xc0062c0e37 0xc0062c0e38}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:06:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4ae8d85-d67c-4dfb-99f9-eb4f82ab68ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0062c0ea0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:06:16.690: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-9716 4714eceb-7986-4177-a925-4de223c06620 49850 2 2022-06-03 22:06:09 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment a4ae8d85-d67c-4dfb-99f9-eb4f82ab68ea 0xc0062c0f07 0xc0062c0f08}] [] [{kube-controller-manager Update apps/v1 2022-06-03 22:06:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a4ae8d85-d67c-4dfb-99f9-eb4f82ab68ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0062c0f70 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Jun 3 22:06:16.693: INFO: pod: "test-deployment-85d87c6f4b-cqlbr": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-cqlbr test-deployment-85d87c6f4b- deployment-9716 15587256-fcb6-4ac0-bb95-ac69e3289983 49718 0 2022-06-03 22:06:09 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.181" ], "mac": "52:57:54:e5:a6:c5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.181" ], "mac": "52:57:54:e5:a6:c5", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 4714eceb-7986-4177-a925-4de223c06620 0xc0062c15f7 0xc0062c15f8}] [] [{kube-controller-manager Update v1 2022-06-03 22:06:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4714eceb-7986-4177-a925-4de223c06620\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus 
Update v1 2022-06-03 22:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:06:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cccjf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cccjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Ope
rator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.181,StartTime:2022-06-03 22:06:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:06:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://c3b0436d96c9498b114e24962594ae2ada9de9b219c0671c9b23c85a9448d2ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 3 22:06:16.693: INFO: pod: "test-deployment-85d87c6f4b-rmmmq": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-rmmmq test-deployment-85d87c6f4b- deployment-9716 bc86b551-6e9f-452f-ac71-f0ac232b1a39 49849 0 2022-06-03 22:06:12 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.183" ], "mac": "06:65:f6:76:74:7d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.183" ], "mac": "06:65:f6:76:74:7d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 4714eceb-7986-4177-a925-4de223c06620 0xc0062c17ef 0xc0062c1800}] [] [{kube-controller-manager Update v1 2022-06-03 22:06:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4714eceb-7986-4177-a925-4de223c06620\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-03 22:06:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-03 22:06:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tblcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tblcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-03 22:06:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.183,StartTime:2022-06-03 22:06:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-03 22:06:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://48fa961aef0d561257bbd4533f86a26c632cb2e5f8bf261df6e20a15d673303f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:16.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9716" for this suite. 
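The ReplicaSet and Pod dumps above trace the full lifecycle this spec drives: revision 1 ran agnhost:2.32, revision 2 swapped the pod template to pause:3.4.1, and revision 3 rolled to httpd:2.4.38-1 with 2 ready replicas. A rough kubectl sketch of the same sequence follows; the test itself drives the API from Go, so the namespace and images below are simply lifted from the log, and the '*' container wildcard is a simplification, not what the test does.

    # revision 1: create the deployment with the agnhost image
    kubectl -n deployment-9716 create deployment test-deployment \
        --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
    # revision 2: swap every container in the pod template to the pause image
    kubectl -n deployment-9716 set image deployment/test-deployment \
        '*=k8s.gcr.io/pause:3.4.1'
    # revision 3: roll to httpd and scale to the 2 replicas seen in the dump
    kubectl -n deployment-9716 set image deployment/test-deployment \
        '*=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1'
    kubectl -n deployment-9716 scale deployment/test-deployment --replicas=2
    kubectl -n deployment-9716 rollout status deployment/test-deployment
    # one ReplicaSet per revision remains, matching the three dumps above
    kubectl -n deployment-9716 get rs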
• [SLOW TEST:17.341 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":30,"skipped":510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jun 3 22:06:16.752: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:11.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:11.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-1734 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:18.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-1423" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:18.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-1734" for this suite. 
• [SLOW TEST:6.100 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:14.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:18.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5545" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":327,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Jun 3 22:06:18.151: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:12.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics Jun 3 22:06:22.352: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 3 22:06:22.555: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For 
garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:22.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6242" for this suite. • [SLOW TEST:10.284 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":23,"skipped":267,"failed":0} Jun 3 22:06:22.566: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:01.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Jun 3 22:06:01.306: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:25.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-70" for this suite.
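The CustomResourcePublishOpenAPI spec above sets up a multi-version CRD, flips one version to served: false, and verifies that version's definition drops out of the published OpenAPI document while the other version is unchanged. A hedged sketch of the same check, assuming a hypothetical CRD foos.example.com whose second entry in .spec.versions is the one being unpublished:

    # mark the second version as no longer served (index 1 is an assumption)
    kubectl patch crd foos.example.com --type=json \
        -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
    # the unserved version's schema should no longer appear in the
    # aggregated OpenAPI document; a zero count here means it was removed
    kubectl get --raw /openapi/v2 | grep -c 'v2.Foo'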
• [SLOW TEST:23.883 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":20,"skipped":419,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} Jun 3 22:06:25.168: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:05:58.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-493da750-ec9e-4f70-bb46-c7befc00b4f4 in namespace container-probe-5822 Jun 3 22:06:04.220: INFO: Started pod busybox-493da750-ec9e-4f70-bb46-c7befc00b4f4 in namespace container-probe-5822 STEP: checking the pod's current state and verifying that restartCount is present Jun 3 22:06:04.223: INFO: Initial restart count of pod busybox-493da750-ec9e-4f70-bb46-c7befc00b4f4 is 0 Jun 3 22:06:50.323: INFO: Restart count of pod container-probe-5822/busybox-493da750-ec9e-4f70-bb46-c7befc00b4f4 is now 1 (46.10009906s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:06:50.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5822" for this suite. 
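The container-probe spec above plants /tmp/health in a busybox container, removes it after a delay, and expects the kubelet to restart the container once the exec liveness probe starts failing — hence restartCount going from 0 to 1 about 46s in. A minimal stand-alone pod reproducing the behavior; the image tag and the 30s/600s timings are illustrative (borrowed from the upstream docs pattern), not this test's exact spec:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: busybox:1.29
        # create the probed file, delete it after 30s, then idle
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # once the file disappears the probe fails and RESTARTS increments,
    # matching the restart-count check in the log above
    kubectl get pod liveness-exec-demo -w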
• [SLOW TEST:52.172 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1020,"failed":0} Jun 3 22:06:50.341: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:04:24.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2884 STEP: creating service affinity-nodeport-transition in namespace services-2884 STEP: creating replication controller affinity-nodeport-transition in namespace services-2884 I0603 22:04:24.515409 40 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2884, replica count: 3 I0603 22:04:27.568311 40 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:04:30.569447 40 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:04:33.569886 40 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 22:04:33.580: INFO: Creating new exec pod Jun 3 22:04:40.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Jun 3 22:04:40.903: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jun 3 22:04:40.904: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 22:04:40.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.43.247 80' Jun 3 22:04:41.137: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.43.247 80\nConnection to 10.233.43.247 80 port [tcp/http] succeeded!\n" Jun 3 22:04:41.137: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 22:04:41.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:04:41.391: INFO: rc: 1 Jun 3 22:04:41.391: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... [the identical command is retried about once per second from 22:04:42 through 22:05:25; every attempt returns rc: 1 with the same "nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused", followed by "Retrying..."]
Jun 3 22:05:26.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:26.631: INFO: rc: 1 Jun 3 22:05:26.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:27.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:27.634: INFO: rc: 1 Jun 3 22:05:27.634: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:28.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:29.696: INFO: rc: 1 Jun 3 22:05:29.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:30.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:30.788: INFO: rc: 1 Jun 3 22:05:30.788: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:31.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:31.703: INFO: rc: 1 Jun 3 22:05:31.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:32.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:32.637: INFO: rc: 1 Jun 3 22:05:32.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:33.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:33.654: INFO: rc: 1 Jun 3 22:05:33.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:34.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:34.643: INFO: rc: 1 Jun 3 22:05:34.643: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:35.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:35.962: INFO: rc: 1 Jun 3 22:05:35.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:36.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:37.476: INFO: rc: 1 Jun 3 22:05:37.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:38.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:38.808: INFO: rc: 1 Jun 3 22:05:38.808: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:39.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:39.728: INFO: rc: 1 Jun 3 22:05:39.728: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:40.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:40.631: INFO: rc: 1 Jun 3 22:05:40.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:41.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:41.824: INFO: rc: 1 Jun 3 22:05:41.824: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:42.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:42.644: INFO: rc: 1 Jun 3 22:05:42.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:43.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:43.646: INFO: rc: 1 Jun 3 22:05:43.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:44.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:44.941: INFO: rc: 1 Jun 3 22:05:44.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:45.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:45.947: INFO: rc: 1 Jun 3 22:05:45.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:46.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:46.844: INFO: rc: 1 Jun 3 22:05:46.844: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:47.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:47.656: INFO: rc: 1 Jun 3 22:05:47.656: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:48.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:48.654: INFO: rc: 1 Jun 3 22:05:48.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:49.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:49.647: INFO: rc: 1 Jun 3 22:05:49.647: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:50.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:50.608: INFO: rc: 1 Jun 3 22:05:50.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:51.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:51.649: INFO: rc: 1 Jun 3 22:05:51.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:52.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:52.632: INFO: rc: 1 Jun 3 22:05:52.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:53.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:53.663: INFO: rc: 1 Jun 3 22:05:53.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:54.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:54.777: INFO: rc: 1 Jun 3 22:05:54.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:55.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:55.925: INFO: rc: 1 Jun 3 22:05:55.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:05:56.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:56.672: INFO: rc: 1 Jun 3 22:05:56.672: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:57.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:05:57.632: INFO: rc: 1 Jun 3 22:05:57.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32689 + echo hostName nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:05:58.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:00.193: INFO: rc: 1 Jun 3 22:06:00.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:00.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:01.020: INFO: rc: 1 Jun 3 22:06:01.020: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:01.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:01.766: INFO: rc: 1 Jun 3 22:06:01.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:02.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:02.649: INFO: rc: 1 Jun 3 22:06:02.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:03.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:03.668: INFO: rc: 1 Jun 3 22:06:03.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:04.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:04.639: INFO: rc: 1 Jun 3 22:06:04.639: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:05.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:05.637: INFO: rc: 1 Jun 3 22:06:05.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:06.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:06.638: INFO: rc: 1 Jun 3 22:06:06.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:07.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:07.623: INFO: rc: 1 Jun 3 22:06:07.623: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:08.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:08.636: INFO: rc: 1 Jun 3 22:06:08.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:09.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:09.658: INFO: rc: 1 Jun 3 22:06:09.658: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32689 + echo hostName nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:10.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:10.664: INFO: rc: 1 Jun 3 22:06:10.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:11.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:11.632: INFO: rc: 1 Jun 3 22:06:11.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:12.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:12.657: INFO: rc: 1 Jun 3 22:06:12.657: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:13.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:13.934: INFO: rc: 1 Jun 3 22:06:13.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:14.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:14.672: INFO: rc: 1 Jun 3 22:06:14.673: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:15.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:15.631: INFO: rc: 1 Jun 3 22:06:15.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:16.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:16.640: INFO: rc: 1 Jun 3 22:06:16.640: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:17.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:17.634: INFO: rc: 1 Jun 3 22:06:17.634: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:18.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:18.632: INFO: rc: 1 Jun 3 22:06:18.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:19.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:19.651: INFO: rc: 1 Jun 3 22:06:19.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:20.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:20.619: INFO: rc: 1 Jun 3 22:06:20.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:21.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:21.777: INFO: rc: 1 Jun 3 22:06:21.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:22.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:22.638: INFO: rc: 1 Jun 3 22:06:22.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:23.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:24.386: INFO: rc: 1 Jun 3 22:06:24.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:24.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:24.622: INFO: rc: 1 Jun 3 22:06:24.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:25.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:25.688: INFO: rc: 1 Jun 3 22:06:25.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:26.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:26.635: INFO: rc: 1 Jun 3 22:06:26.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:27.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:27.641: INFO: rc: 1 Jun 3 22:06:27.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32689 + echo hostName nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:28.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:29.442: INFO: rc: 1 Jun 3 22:06:29.442: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:30.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:30.655: INFO: rc: 1 Jun 3 22:06:30.655: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:31.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:31.632: INFO: rc: 1 Jun 3 22:06:31.633: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:32.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:32.668: INFO: rc: 1 Jun 3 22:06:32.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:33.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:33.635: INFO: rc: 1 Jun 3 22:06:33.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:34.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:34.627: INFO: rc: 1 Jun 3 22:06:34.627: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:35.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:35.653: INFO: rc: 1 Jun 3 22:06:35.653: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:36.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:36.637: INFO: rc: 1 Jun 3 22:06:36.637: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:37.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:37.641: INFO: rc: 1 Jun 3 22:06:37.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:38.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:38.638: INFO: rc: 1 Jun 3 22:06:38.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:39.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:39.636: INFO: rc: 1 Jun 3 22:06:39.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:40.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:40.646: INFO: rc: 1 Jun 3 22:06:40.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:41.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:41.652: INFO: rc: 1 Jun 3 22:06:41.652: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
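[editor's note: the elided block above is the e2e framework's poll-until-timeout pattern: re-run the probe about once per second until it succeeds or the 2m0s budget is spent. Below is a minimal, illustrative Go sketch of that pattern. The helper name checkReachable and its parameters are assumptions for illustration, not the verbatim test code (the real logic lives in test/e2e/network/service.go, per the stack trace further down); wait.PollImmediate is the genuine k8s.io/apimachinery helper of this Kubernetes era.

// reachability_sketch.go - illustrative sketch only, not the e2e test source.
package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkReachable (hypothetical name) runs the same probe the log shows:
// exec into the client pod and attempt a 2-second TCP connect to the
// NodePort with netcat. It reports success/failure per attempt.
func checkReachable(ns, pod, host string, port int) bool {
	cmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	out, err := exec.Command("kubectl", "--namespace="+ns, "exec", pod,
		"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	if err != nil {
		// Mirrors the "Service reachability failing ... Retrying..." lines.
		fmt.Printf("Service reachability failing with output:\n%s\nRetrying...\n", out)
		return false
	}
	return true
}

func main() {
	// Poll once per second; give up after the 2m0s timeout seen in the log.
	err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		return checkReachable("services-2884", "execpod-affinity9ffh8",
			"10.10.190.207", 32689), nil
	})
	if err != nil {
		fmt.Println("service is not reachable within 2m0s timeout")
	}
}

The final attempt and the resulting test failure follow.]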
Jun 3 22:06:41.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689' Jun 3 22:06:41.892: INFO: rc: 1 Jun 3 22:06:41.892: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2884 exec execpod-affinity9ffh8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32689: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32689 nc: connect to 10.10.190.207 port 32689 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:41.892: FAIL: Unexpected error: <*errors.errorString | 0xc000ad0830>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32689 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32689 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001886420, 0x77b33d8, 0xc0035b66e0, 0xc002c7af00, 0x101) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531 k8s.io/kubernetes/test/e2e/network.glob..func24.27() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000703200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000703200, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 Jun 3 22:06:41.894: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2884, will wait for the garbage collector to delete the pods Jun 3 22:06:41.960: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.138381ms Jun 3 22:06:42.062: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 101.134496ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-2884". STEP: Found 28 events. 
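The loop above is the test's service-reachability probe: from the exec pod it pipes "echo hostName" into nc against the NodePort endpoint 10.10.190.207:32689 roughly once per second until the 2m0s budget is exhausted. Below is a minimal Go sketch of that retry pattern, for illustration only; it is not the e2e framework's actual helper (the real logic lives in test/e2e/network/service.go per the stack trace), and the function name, the one-second backoff, and shelling out to a kubectl binary on the PATH are all assumptions.

// probe_sketch.go: hypothetical reconstruction of the retry probe shown in
// the log above, NOT the e2e framework's helper. Assumes kubectl is on the
// PATH and already pointed at the cluster under test.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeNodePort runs nc inside the exec pod via kubectl, retrying roughly
// once per second until the endpoint answers or the timeout expires,
// mirroring the "Retrying..." loop in the log.
func probeNodePort(namespace, execPod, host string, port int, timeout time.Duration) error {
	shell := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--namespace", namespace, "exec", execPod,
			"--", "/bin/sh", "-x", "-c", shell).CombinedOutput()
		if err == nil {
			return nil // nc connected; the service is reachable
		}
		fmt.Printf("Service reachability failing, retrying: %v\n%s\n", err, out)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s:%d over TCP protocol",
		timeout, host, port)
}

func main() {
	// Values taken from the log: namespace services-2884, exec pod
	// execpod-affinity9ffh8, node IP 10.10.190.207, NodePort 32689.
	if err := probeNodePort("services-2884", "execpod-affinity9ffh8",
		"10.10.190.207", 32689, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

In the run above every attempt hit "Connection refused", so a loop of this shape exhausted its 2m0s budget and the test failed with exactly the timeout error the FAIL line reports.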
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2884".
STEP: Found 28 events.
Jun 3 22:06:52.178: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-4qf7w: { } Scheduled: Successfully assigned services-2884/affinity-nodeport-transition-4qf7w to node2
Jun 3 22:06:52.178: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-cscqz: { } Scheduled: Successfully assigned services-2884/affinity-nodeport-transition-cscqz to node1
Jun 3 22:06:52.178: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-zmkvn: { } Scheduled: Successfully assigned services-2884/affinity-nodeport-transition-zmkvn to node1
Jun 3 22:06:52.178: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity9ffh8: { } Scheduled: Successfully assigned services-2884/execpod-affinity9ffh8 to node1
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:24 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-zmkvn
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:24 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-cscqz
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:24 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-4qf7w
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:27 +0000 UTC - event for affinity-nodeport-transition-4qf7w: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 299.757861ms
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:27 +0000 UTC - event for affinity-nodeport-transition-4qf7w: {kubelet node2} Created: Created container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:27 +0000 UTC - event for affinity-nodeport-transition-4qf7w: {kubelet node2} Started: Started container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:27 +0000 UTC - event for affinity-nodeport-transition-4qf7w: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:30 +0000 UTC - event for affinity-nodeport-transition-cscqz: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:30 +0000 UTC - event for affinity-nodeport-transition-zmkvn: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:30 +0000 UTC - event for affinity-nodeport-transition-zmkvn: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 257.325598ms
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:31 +0000 UTC - event for affinity-nodeport-transition-cscqz: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 484.40895ms
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:31 +0000 UTC - event for affinity-nodeport-transition-cscqz: {kubelet node1} Started: Started container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:31 +0000 UTC - event for affinity-nodeport-transition-cscqz: {kubelet node1} Created: Created container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:31 +0000 UTC - event for affinity-nodeport-transition-zmkvn: {kubelet node1} Created: Created container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:32 +0000 UTC - event for affinity-nodeport-transition-zmkvn: {kubelet node1} Started: Started container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:35 +0000 UTC - event for execpod-affinity9ffh8: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:35 +0000 UTC - event for execpod-affinity9ffh8: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 308.079579ms
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:36 +0000 UTC - event for execpod-affinity9ffh8: {kubelet node1} Created: Created container agnhost-container
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:04:36 +0000 UTC - event for execpod-affinity9ffh8: {kubelet node1} Started: Started container agnhost-container
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:06:41 +0000 UTC - event for affinity-nodeport-transition: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-2884/affinity-nodeport-transition: Operation cannot be fulfilled on endpoints "affinity-nodeport-transition": the object has been modified; please apply your changes to the latest version and try again
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:06:41 +0000 UTC - event for affinity-nodeport-transition-4qf7w: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:06:41 +0000 UTC - event for affinity-nodeport-transition-cscqz: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:06:41 +0000 UTC - event for affinity-nodeport-transition-zmkvn: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
Jun 3 22:06:52.178: INFO: At 2022-06-03 22:06:41 +0000 UTC - event for execpod-affinity9ffh8: {kubelet node1} Killing: Stopping container agnhost-container
Jun 3 22:06:52.192: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 3 22:06:52.192: INFO: 
Jun 3 22:06:52.196: INFO: Logging node info for node master1
Jun 3 22:06:52.198: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 50261 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 22:06:52.199: INFO: Logging kubelet events for node master1
Jun 3 22:06:52.201: INFO: Logging pods the kubelet thinks is on node master1
Jun 3 22:06:52.223: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 22:06:52.223: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container autoscaler ready: true, restart count 2
Jun 3 22:06:52.223: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container coredns ready: true, restart count 1
Jun 3 22:06:52.223: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container docker-registry ready: true, restart count 0
Jun 3 22:06:52.223: INFO: Container nginx ready: true, restart count 0
Jun 3 22:06:52.223: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container kube-scheduler ready: true, restart count 0
Jun 3 22:06:52.223: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container kube-apiserver ready: true, restart count 0
Jun 3 22:06:52.223: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container kube-controller-manager ready: true, restart count 1
Jun 3 22:06:52.223: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Init container install-cni ready: true, restart count 0
Jun 3 22:06:52.223: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 22:06:52.223: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:06:52.223: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.223: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:06:52.223: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:06:52.316: INFO: Latency metrics for node master1
Jun 3 22:06:52.316: INFO: Logging node info for node master2
Jun 3 22:06:52.319: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 50257 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null
flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 22:06:52.320: INFO: Logging kubelet events for node master2
Jun 3 22:06:52.322: INFO: Logging pods the kubelet thinks is on node master2
Jun 3 22:06:52.339: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:06:52.339: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:06:52.339: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-apiserver ready: true, restart count 0
Jun 3 22:06:52.339: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 3 22:06:52.339: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 22:06:52.339: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:06:52.339: INFO: Container prometheus-operator ready: true, restart count 0
Jun 3 22:06:52.339: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-scheduler ready: true, restart count 3
Jun 3 22:06:52.339: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 22:06:52.339: INFO: Init container install-cni ready: true, restart count 2
Jun 3 22:06:52.339: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 22:06:52.339: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.339: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:06:52.425: INFO: Latency metrics for node master2
Jun 3 22:06:52.425: INFO: Logging node info for node master3
Jun 3 22:06:52.427: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 50262 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:
node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:06:44 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 22:06:52.428: INFO: Logging kubelet events for node master3
Jun 3 22:06:52.430: INFO: Logging pods the kubelet thinks is on node master3
Jun 3 22:06:52.448: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container coredns ready: true, restart count 1
Jun 3 22:06:52.448: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:06:52.448: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:06:52.448: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 3 22:06:52.448: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container kube-scheduler ready: true, restart count 3
Jun 3 22:06:52.448: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 22:06:52.448: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container nfd-controller ready: true, restart count 0
Jun 3 22:06:52.448: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container kube-apiserver ready: true, restart count 0
Jun 3 22:06:52.448: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Init container install-cni ready: true, restart count 2
Jun 3 22:06:52.448: INFO: Container kube-flannel ready: true, restart count 2
Jun 3 22:06:52.448: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.448: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:06:52.533: INFO: Latency metrics for node master3
Jun 3 22:06:52.533: INFO: Logging node info for node node1
Jun 3 22:06:52.536: INFO: Node Info: &Node{ObjectMeta:{node1
482ecf0f-7f88-436d-a313-227096fe8b8d 50256 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:11:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:42 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:42 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:42 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:06:42 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 3 22:06:52.536: INFO: Logging kubelet events for node node1
Jun 3 22:06:52.539: INFO: Logging pods the kubelet thinks is on node node1
Jun 3 22:06:52.553: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.553: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 22:06:52.553: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container config-reloader ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container grafana ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container prometheus ready: true, restart count 1
Jun 3 22:06:52.554: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container nodereport ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container reconcile ready: true, restart count 0
Jun 3 22:06:52.554: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Init container install-cni ready: true, restart count 2
Jun 3 22:06:52.554: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 22:06:52.554: INFO: externalname-service-ltl7q started at 2022-06-03 22:05:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container externalname-service ready: true, restart count 0
Jun 3 22:06:52.554: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 22:06:52.554: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container discover ready: false, restart count 0
Jun 3 22:06:52.554: INFO: Container init ready: false, restart count 0
Jun 3 22:06:52.554: INFO: Container install ready: false, restart count 0
Jun 3 22:06:52.554: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 22:06:52.554: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:06:52.554: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container collectd ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 22:06:52.554: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 22:06:52.554: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:06:52.554: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 22:06:52.554: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.554: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 22:06:52.977: INFO: Latency metrics for node node1
Jun 3 22:06:52.977: INFO: Logging node info for node node2
Jun 3 22:06:52.980: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 50259 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:12:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:06:43 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:06:52.980: INFO: Logging kubelet events for node node2 Jun 3 22:06:52.983: INFO: Logging pods the kubelet thinks is on node node2 Jun 3 22:06:52.992: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:06:52.993: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:06:52.993: INFO: kube-multus-ds-amd64-n7spl started 
at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:06:52.993: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:06:52.993: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:06:52.993: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:06:52.993: INFO: Container collectd ready: true, restart count 0 Jun 3 22:06:52.993: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:06:52.993: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:06:52.993: INFO: execpod-affinitygk6sd started at 2022-06-03 22:06:24 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container agnhost-container ready: true, restart count 0 Jun 3 22:06:52.993: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 22:06:52.993: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 22:06:52.993: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:06:52.993: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:06:52.993: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:06:52.993: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:06:52.993: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:06:52.993: INFO: affinity-nodeport-timeout-7w8xs started at 2022-06-03 22:06:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Jun 3 22:06:52.993: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 22:06:52.993: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:06:52.993: INFO: forbid-27571562-gfhz5 started at 2022-06-03 22:02:00 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container c ready: true, restart count 0 Jun 3 22:06:52.993: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:06:52.993: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded) Jun 3 22:06:52.993: INFO: Container discover ready: false, restart count 0 Jun 3 22:06:52.993: INFO: Container init ready: false, restart count 0 Jun 3 22:06:52.993: INFO: Container install ready: false, restart count 0 Jun 3 22:06:52.993: INFO: execpodwl9ds started at 2022-06-03 22:05:23 +0000 UTC (0+1 container statuses recorded) Jun 3 22:06:52.993: INFO: Container agnhost-container ready: true, restart count 0 Jun 3 22:06:52.993: INFO: 
tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.993: INFO: Container tas-extender ready: true, restart count 0
Jun 3 22:06:52.993: INFO: affinity-nodeport-timeout-wnsmp started at 2022-06-03 22:06:18 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.993: INFO: Container affinity-nodeport-timeout ready: true, restart count 0
Jun 3 22:06:52.993: INFO: affinity-nodeport-timeout-2xw5d started at 2022-06-03 22:06:18 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.993: INFO: Container affinity-nodeport-timeout ready: true, restart count 0
Jun 3 22:06:52.993: INFO: externalname-service-bkpmw started at 2022-06-03 22:05:20 +0000 UTC (0+1 container statuses recorded)
Jun 3 22:06:52.993: INFO: Container externalname-service ready: true, restart count 0
Jun 3 22:06:53.148: INFO: Latency metrics for node node2
Jun 3 22:06:53.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2884" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [148.674 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jun 3 22:06:41.892: Unexpected error:
      <*errors.errorString | 0xc000ad0830>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32689 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32689 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":503,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Jun 3 22:06:53.164: INFO: Running AfterSuite actions on all nodes
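The failure above boils down to a plain TCP connectivity check: the suite dials the node IP and NodePort (here 10.10.190.207:32689) and gives up after a 2m0s budget. A minimal, self-contained sketch of that check in Go, standard library only — the endpoint is the one from the failure message, everything else is illustrative rather than the framework's own code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP reports whether a TCP connection to addr can be opened within
// timeout. This is the essence of what the suite's `nc -v -t -w 2` probes
// verify; it is a standalone sketch, not the e2e framework's code.
func probeTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Endpoint taken from the failure message above; adjust as needed.
	if err := probeTCP("10.10.190.207:32689", 2*time.Second); err != nil {
		fmt.Println("not reachable:", err)
		return
	}
	fmt.Println("reachable")
}
```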
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:01:34.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0603 22:01:34.137088 30 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:07:00.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6068" for this suite.

• [SLOW TEST:326.055 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":22,"skipped":359,"failed":0}
Jun 3 22:07:00.169: INFO: Running AfterSuite actions on all nodes
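The ForbidConcurrent behaviour exercised above comes from the CronJob's concurrencyPolicy: with Forbid, the controller skips the next scheduled run while a job from a previous run is still active. A minimal sketch of such an object using the upstream batch/v1 Go types — the name "forbid" and container "c" echo the pod forbid-27571562-gfhz5 seen in the node2 listing, while the schedule, image, and command are illustrative assumptions, not taken from this log:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cj := batchv1.CronJob{
		// Name echoes the forbid-* pods in the listing above; illustrative.
		ObjectMeta: metav1.ObjectMeta{Name: "forbid"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *", // assumed schedule, not from the log
			// Forbid: skip the next scheduled run while a job spawned by
			// this CronJob is still running.
			ConcurrencyPolicy: batchv1.ForbidConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c", // container name seen in the log
								Image:   "busybox:1.28",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	fmt.Println(cj.Name, cj.Spec.ConcurrencyPolicy)
}
```

Note the deprecation warning in the log: the test still creates the object through batch/v1beta1, which is removed in v1.25; batch/v1, as sketched here, is the replacement.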
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:05:20.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7454
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7454
I0603 22:05:20.736718 38 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7454, replica count: 2
I0603 22:05:23.788165 38 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 3 22:05:23.788: INFO: Creating new exec pod
Jun 3 22:05:28.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jun 3 22:05:29.229: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jun 3 22:05:29.229: INFO: stdout: "externalname-service-ltl7q"
Jun 3 22:05:29.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.8.4 80'
Jun 3 22:05:29.494: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.8.4 80\nConnection to 10.233.8.4 80 port [tcp/http] succeeded!\n"
Jun 3 22:05:29.494: INFO: stdout: "externalname-service-bkpmw"
Jun 3 22:05:29.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908'
Jun 3 22:05:29.724: INFO: rc: 1
Jun 3 22:05:29.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30908
nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[the intervening attempts are elided: the same probe was retried roughly once per second from 22:05:30.724 through 22:06:17.724, and every attempt failed identically with "nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused" (rc: 1); the attempts differ only in their timestamps and, once in this excerpt, in the ordering of the two shell trace lines]
Jun 3 22:06:18.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908'
Jun 3 22:06:18.982: INFO: rc: 1
Jun 3 22:06:18.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30908
nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 3 22:06:19.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:19.979: INFO: rc: 1 Jun 3 22:06:19.979: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:20.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:21.040: INFO: rc: 1 Jun 3 22:06:21.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName+ nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:21.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:22.162: INFO: rc: 1 Jun 3 22:06:22.162: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:22.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:22.942: INFO: rc: 1 Jun 3 22:06:22.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:23.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:24.009: INFO: rc: 1 Jun 3 22:06:24.009: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:24.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:24.959: INFO: rc: 1 Jun 3 22:06:24.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:25.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:26.283: INFO: rc: 1 Jun 3 22:06:26.283: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:26.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:26.970: INFO: rc: 1 Jun 3 22:06:26.970: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:27.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:27.956: INFO: rc: 1 Jun 3 22:06:27.956: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:28.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:28.961: INFO: rc: 1 Jun 3 22:06:28.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:29.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:29.961: INFO: rc: 1 Jun 3 22:06:29.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:30.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:30.997: INFO: rc: 1 Jun 3 22:06:30.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:31.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:31.962: INFO: rc: 1 Jun 3 22:06:31.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:32.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:32.950: INFO: rc: 1 Jun 3 22:06:32.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:33.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:33.961: INFO: rc: 1 Jun 3 22:06:33.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:34.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:34.967: INFO: rc: 1 Jun 3 22:06:34.967: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:35.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:35.958: INFO: rc: 1 Jun 3 22:06:35.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:36.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:36.977: INFO: rc: 1 Jun 3 22:06:36.977: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:37.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:37.953: INFO: rc: 1 Jun 3 22:06:37.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:38.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:38.962: INFO: rc: 1 Jun 3 22:06:38.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:39.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:39.951: INFO: rc: 1 Jun 3 22:06:39.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:40.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:40.963: INFO: rc: 1 Jun 3 22:06:40.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:41.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:41.994: INFO: rc: 1 Jun 3 22:06:41.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:42.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:42.943: INFO: rc: 1 Jun 3 22:06:42.943: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:43.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:43.953: INFO: rc: 1 Jun 3 22:06:43.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:44.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:44.954: INFO: rc: 1 Jun 3 22:06:44.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:45.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:45.929: INFO: rc: 1 Jun 3 22:06:45.929: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:46.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:46.948: INFO: rc: 1 Jun 3 22:06:46.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:47.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:47.959: INFO: rc: 1 Jun 3 22:06:47.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:48.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:48.973: INFO: rc: 1 Jun 3 22:06:48.973: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:49.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:49.957: INFO: rc: 1 Jun 3 22:06:49.957: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:50.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:50.966: INFO: rc: 1 Jun 3 22:06:50.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:51.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:51.990: INFO: rc: 1 Jun 3 22:06:51.990: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:52.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:52.993: INFO: rc: 1 Jun 3 22:06:52.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:53.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:53.958: INFO: rc: 1 Jun 3 22:06:53.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:54.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:54.946: INFO: rc: 1 Jun 3 22:06:54.946: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:55.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:55.976: INFO: rc: 1 Jun 3 22:06:55.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:56.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:56.982: INFO: rc: 1 Jun 3 22:06:56.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:57.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:57.975: INFO: rc: 1 Jun 3 22:06:57.975: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:58.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:58.954: INFO: rc: 1 Jun 3 22:06:58.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:06:59.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:06:59.952: INFO: rc: 1 Jun 3 22:06:59.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:00.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:00.963: INFO: rc: 1 Jun 3 22:07:00.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:01.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:01.958: INFO: rc: 1 Jun 3 22:07:01.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:02.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:02.962: INFO: rc: 1 Jun 3 22:07:02.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:03.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:03.968: INFO: rc: 1 Jun 3 22:07:03.968: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:04.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:04.972: INFO: rc: 1 Jun 3 22:07:04.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:05.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:05.952: INFO: rc: 1 Jun 3 22:07:05.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:06.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:06.966: INFO: rc: 1 Jun 3 22:07:06.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:07.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:07.945: INFO: rc: 1 Jun 3 22:07:07.945: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:08.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:08.952: INFO: rc: 1 Jun 3 22:07:08.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + nc -v -t -w+ 2 10.10.190.207echo 30908 hostName nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:09.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:09.950: INFO: rc: 1 Jun 3 22:07:09.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:10.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:10.956: INFO: rc: 1 Jun 3 22:07:10.956: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:11.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:11.944: INFO: rc: 1 Jun 3 22:07:11.944: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:12.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:12.952: INFO: rc: 1 Jun 3 22:07:12.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:13.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:13.951: INFO: rc: 1 Jun 3 22:07:13.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:14.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:14.959: INFO: rc: 1 Jun 3 22:07:14.959: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:15.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:15.964: INFO: rc: 1 Jun 3 22:07:15.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:16.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:16.949: INFO: rc: 1 Jun 3 22:07:16.949: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:17.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:17.973: INFO: rc: 1 Jun 3 22:07:17.973: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:18.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:18.969: INFO: rc: 1 Jun 3 22:07:18.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:19.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:19.945: INFO: rc: 1 Jun 3 22:07:19.945: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30908 + echo hostName nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:20.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:20.938: INFO: rc: 1 Jun 3 22:07:20.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:21.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:21.982: INFO: rc: 1 Jun 3 22:07:21.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:22.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:22.955: INFO: rc: 1 Jun 3 22:07:22.955: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30908 + echo hostName nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:23.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:23.968: INFO: rc: 1 Jun 3 22:07:23.968: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:24.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:24.964: INFO: rc: 1 Jun 3 22:07:24.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:25.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:25.964: INFO: rc: 1 Jun 3 22:07:25.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:26.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:26.978: INFO: rc: 1 Jun 3 22:07:26.978: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:27.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:27.966: INFO: rc: 1 Jun 3 22:07:27.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:28.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908' Jun 3 22:07:28.933: INFO: rc: 1 Jun 3 22:07:28.933: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30908 nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
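What the loop above is doing, reconstructed as a sketch: the test execs into the helper pod and pipes a line into nc against what is apparently the node address and NodePort of the service under test, retrying about once a second until the port answers or a deadline passes. The Go below is a minimal stand-alone approximation of that pattern, not the e2e framework's actual code; it assumes kubectl is on PATH and reuses the pod, namespace, and endpoint names from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const (
            ns       = "services-7454"       // namespace from the log
            pod      = "execpodwl9ds"        // helper exec pod from the log
            endpoint = "10.10.190.207 30908" // node IP and NodePort under test
        )
        deadline := time.Now().Add(2 * time.Minute) // matches the 2m0s timeout below
        for time.Now().Before(deadline) {
            // -w 2 caps each TCP connect attempt at two seconds, as in the log.
            cmd := exec.Command("kubectl", "exec", "--namespace="+ns, pod, "--",
                "/bin/sh", "-x", "-c", "echo hostName | nc -v -t -w 2 "+endpoint)
            if out, err := cmd.CombinedOutput(); err == nil {
                fmt.Printf("service reachable: %s", out)
                return
            }
            time.Sleep(time.Second) // ~1s retry cadence, matching the timestamps above
        }
        fmt.Println("service is not reachable within 2m0s timeout")
    }

The final two attempts and the resulting failure follow.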
Jun 3 22:07:29.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908'
Jun 3 22:07:29.948: INFO: rc: 1
Jun 3 22:07:29.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30908
nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 3 22:07:29.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908'
Jun 3 22:07:30.185: INFO: rc: 1
Jun 3 22:07:30.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7454 exec execpodwl9ds -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30908:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30908
nc: connect to 10.10.190.207 port 30908 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 3 22:07:30.186: FAIL: Unexpected error:
    <*errors.errorString | 0xc004440110>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30908 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30908 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022f4000)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0022f4000)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0022f4000, 0x70f99e8)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 3 22:07:30.187: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
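The 2m0s deadline in the FAIL above is the service-reachability timeout enforced at test/e2e/network/service.go:1351. As a hedged illustration only (not the framework's actual code), a poll-until-deadline of this shape is commonly written with the apimachinery wait helpers; probeOnce here is a hypothetical stand-in for one kubectl/nc attempt, and the module is assumed to depend on k8s.io/apimachinery:

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // probeOnce stands in for a single reachability attempt: return (true, nil)
    // to stop polling with success, (false, nil) to retry on the next tick, or
    // a non-nil error to abort the poll immediately.
    func probeOnce() (bool, error) {
        return false, nil // every attempt in the log above got "Connection refused"
    }

    func main() {
        // Try immediately, then once per second, for at most two minutes.
        err := wait.PollImmediate(time.Second, 2*time.Minute, probeOnce)
        if errors.Is(err, wait.ErrWaitTimeout) {
            // A caller would typically wrap the timeout into a message like the one above.
            fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30908 over TCP protocol")
        }
    }

The namespace's events, dumped next, are the natural place to look for why the endpoints never answered.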
Jun 3 22:07:30.221: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodwl9ds: { } Scheduled: Successfully assigned services-7454/execpodwl9ds to node2
Jun 3 22:07:30.221: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-bkpmw: { } Scheduled: Successfully assigned services-7454/externalname-service-bkpmw to node2
Jun 3 22:07:30.221: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-ltl7q: { } Scheduled: Successfully assigned services-7454/externalname-service-ltl7q to node1
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:20 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-ltl7q
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:20 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-bkpmw
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-bkpmw: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-bkpmw: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 270.606339ms
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-bkpmw: {kubelet node2} Started: Started container externalname-service
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-bkpmw: {kubelet node2} Created: Created container externalname-service
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-ltl7q: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-ltl7q: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 261.65792ms
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:22 +0000 UTC - event for externalname-service-ltl7q: {kubelet node1} Created: Created container externalname-service
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:23 +0000 UTC - event for externalname-service-ltl7q: {kubelet node1} Started: Started container externalname-service
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:26 +0000 UTC - event for execpodwl9ds: {kubelet node2} Started: Started container agnhost-container
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:26 +0000 UTC - event for execpodwl9ds: {kubelet node2} Created: Created container agnhost-container
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:26 +0000 UTC - event for execpodwl9ds: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:07:30.221: INFO: At 2022-06-03 22:05:26 +0000 UTC - event for execpodwl9ds: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 345.60668ms
Jun 3 22:07:30.224: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
Jun 3 22:07:30.224: INFO: execpodwl9ds                node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:23 +0000 UTC }]
Jun 3 22:07:30.224: INFO: externalname-service-bkpmw  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:20 +0000 UTC } {Ready True
0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:20 +0000 UTC }] Jun 3 22:07:30.224: INFO: externalname-service-ltl7q node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 22:05:20 +0000 UTC }] Jun 3 22:07:30.224: INFO: Jun 3 22:07:30.228: INFO: Logging node info for node master1 Jun 3 22:07:30.230: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 50448 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} 
{} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:07:30.231: INFO: Logging kubelet events for node master1 Jun 3 22:07:30.234: INFO: Logging pods the kubelet thinks is on node master1 Jun 3 22:07:30.256: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.256: INFO: Container autoscaler ready: true, restart count 2 Jun 3 22:07:30.256: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.256: INFO: Container coredns ready: true, restart count 1 Jun 3 22:07:30.256: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.256: INFO: Container docker-registry ready: true, restart count 0 Jun 3 22:07:30.256: INFO: Container nginx ready: true, restart count 0 Jun 3 22:07:30.256: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.256: INFO: Container kube-scheduler ready: true, restart count 0 Jun 3 22:07:30.256: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses 
recorded) Jun 3 22:07:30.256: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:07:30.256: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.257: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 3 22:07:30.257: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded) Jun 3 22:07:30.257: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:07:30.257: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:07:30.257: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.257: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:07:30.257: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.257: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:07:30.257: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:07:30.257: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.257: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:07:30.350: INFO: Latency metrics for node master1 Jun 3 22:07:30.350: INFO: Logging node info for node master2 Jun 3 22:07:30.354: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 50444 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:07:30.354: INFO: Logging kubelet events for node master2 Jun 3 22:07:30.357: INFO: Logging pods the kubelet thinks is on node master2 Jun 3 22:07:30.364: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 22:07:30.365: INFO: 
kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:07:30.365: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:07:30.365: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:07:30.365: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:07:30.365: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:07:30.365: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 22:07:30.365: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:07:30.365: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:07:30.365: INFO: Container prometheus-operator ready: true, restart count 0 Jun 3 22:07:30.365: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.365: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:07:30.365: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:07:30.435: INFO: Latency metrics for node master2 Jun 3 22:07:30.435: INFO: Logging node info for node master3 Jun 3 22:07:30.439: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 50449 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:07:24 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:07:30.440: INFO: Logging kubelet events 
for node master3 Jun 3 22:07:30.443: INFO: Logging pods the kubelet thinks is on node master3 Jun 3 22:07:30.452: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:07:30.453: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container coredns ready: true, restart count 1 Jun 3 22:07:30.453: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.453: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:07:30.453: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:07:30.453: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 22:07:30.453: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 22:07:30.453: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:07:30.453: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container nfd-controller ready: true, restart count 0 Jun 3 22:07:30.453: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:07:30.453: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:07:30.453: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:07:30.453: INFO: Container kube-flannel ready: true, restart count 2 Jun 3 22:07:30.546: INFO: Latency metrics for node master3 Jun 3 22:07:30.546: INFO: Logging node info for node node1 Jun 3 22:07:30.549: INFO: Node Info: &Node{ObjectMeta:{node1 482ecf0f-7f88-436d-a313-227096fe8b8d 50443 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true 
feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:11:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:07:30.550: INFO: Logging kubelet events for node node1 Jun 3 22:07:30.552: INFO: Logging pods the kubelet thinks is on node node1 Jun 3 22:07:30.566: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:07:30.567: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 22:07:30.567: INFO: externalname-service-ltl7q started at 2022-06-03 22:05:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container externalname-service ready: true, restart count 0 Jun 3 22:07:30.567: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container 
nginx-proxy ready: true, restart count 2 Jun 3 22:07:30.567: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 3 22:07:30.567: INFO: Container discover ready: false, restart count 0 Jun 3 22:07:30.567: INFO: Container init ready: false, restart count 0 Jun 3 22:07:30.567: INFO: Container install ready: false, restart count 0 Jun 3 22:07:30.567: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:07:30.567: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.567: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:07:30.567: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:07:30.567: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:07:30.567: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 22:07:30.567: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:07:30.567: INFO: Container collectd ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:07:30.567: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:30.567: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:07:30.567: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 3 22:07:30.567: INFO: Container config-reloader ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container grafana ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container prometheus ready: true, restart count 1 Jun 3 22:07:30.567: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:30.567: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:07:30.567: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:07:30.795: INFO: Latency metrics for node node1 Jun 3 22:07:30.795: INFO: Logging node info for node node2 Jun 3 22:07:30.798: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 50446 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:12:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:07:23 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:07:30.799: INFO: Logging kubelet events for node node2 Jun 3 22:07:30.801: INFO: Logging pods the kubelet thinks is on node node2 Jun 3 22:07:31.269: INFO: execpodwl9ds started at 2022-06-03 22:05:23 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container agnhost-container ready: true, restart count 0 Jun 3 22:07:31.269: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: 
INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:07:31.269: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded) Jun 3 22:07:31.269: INFO: Container discover ready: false, restart count 0 Jun 3 22:07:31.269: INFO: Container init ready: false, restart count 0 Jun 3 22:07:31.269: INFO: Container install ready: false, restart count 0 Jun 3 22:07:31.269: INFO: affinity-nodeport-timeout-2xw5d started at 2022-06-03 22:06:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Jun 3 22:07:31.269: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container tas-extender ready: true, restart count 0 Jun 3 22:07:31.269: INFO: affinity-nodeport-timeout-wnsmp started at 2022-06-03 22:06:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Jun 3 22:07:31.269: INFO: externalname-service-bkpmw started at 2022-06-03 22:05:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container externalname-service ready: true, restart count 0 Jun 3 22:07:31.269: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:07:31.269: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:07:31.269: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:07:31.269: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:07:31.269: INFO: Container collectd ready: true, restart count 0 Jun 3 22:07:31.269: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:07:31.269: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:07:31.269: INFO: execpod-affinitygk6sd started at 2022-06-03 22:06:24 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.269: INFO: Container agnhost-container ready: true, restart count 0 Jun 3 22:07:31.270: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.270: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:07:31.270: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.270: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:07:31.270: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:07:31.270: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:07:31.270: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:07:31.270: INFO: affinity-nodeport-timeout-7w8xs started at 2022-06-03 22:06:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.270: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Jun 3 22:07:31.270: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.270: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 22:07:31.270: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC 
(0+2 container statuses recorded) Jun 3 22:07:31.270: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:07:31.270: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:07:31.270: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.270: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 22:07:31.270: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:07:31.270: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:07:31.669: INFO: Latency metrics for node node2 Jun 3 22:07:31.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7454" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [130.986 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:07:30.186: Unexpected error: <*errors.errorString | 0xc004440110>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30908 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30908 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":45,"skipped":896,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} Jun 3 22:07:31.689: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:06:14.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2233 Jun 3 22:06:14.337: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:06:16.348: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 3 22:06:18.342: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 3 22:06:18.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 3 22:06:18.810: INFO: stderr: "+ curl -q -s 
--connect-timeout 1 http://localhost:10249/proxyMode\n" Jun 3 22:06:18.810: INFO: stdout: "iptables" Jun 3 22:06:18.810: INFO: proxyMode: iptables Jun 3 22:06:18.817: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 3 22:06:18.819: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-2233 STEP: creating replication controller affinity-nodeport-timeout in namespace services-2233 I0603 22:06:18.831182 33 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2233, replica count: 3 I0603 22:06:21.881673 33 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0603 22:06:24.882642 33 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 3 22:06:24.896: INFO: Creating new exec pod Jun 3 22:06:29.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Jun 3 22:06:30.154: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Jun 3 22:06:30.154: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 22:06:30.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.4.44 80' Jun 3 22:06:30.402: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.4.44 80\nConnection to 10.233.4.44 80 port [tcp/http] succeeded!\n" Jun 3 22:06:30.402: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 3 22:06:30.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:06:30.653: INFO: rc: 1 Jun 3 22:06:30.653: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:06:31.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:06:31.906: INFO: rc: 1 Jun 3 22:06:31.906: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
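The loop above is the suite's standard NodePort reachability check: it shells into the exec pod and drives nc at the node IP and the allocated NodePort until the TCP connect succeeds or the 2m0s budget runs out. Note that the earlier probes against the service name and the ClusterIP 10.233.4.44 did succeed at the TCP level; the "400 Bad Request" bodies are expected, because the probe pipes the literal string "hostName" at an HTTP server and only the connect matters. The suite implements this retry in its Go service helpers; what follows is a minimal hand-rolled shell equivalent for reproducing the probe manually, reusing the namespace, pod, node IP and NodePort from this run. The 60-attempt budget and 1s sleep are illustrative assumptions, not the framework's exact values.

# Hand-rolled sketch of the reachability probe; NOT the framework's code.
# Namespace, pod name, node IP and NodePort are taken from this run;
# the attempt budget and sleep interval are assumptions.
for i in $(seq 1 60); do
  if kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 \
      exec execpod-affinitygk6sd -- /bin/sh -x -c \
      'echo hostName | nc -v -t -w 2 10.10.190.207 30573'; then
    echo "reachable after ${i} attempt(s)"
    break
  fi
  echo "Retrying..."
  sleep 1
done
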
[... 52 further probe attempts, Jun 3 22:06:32.654 through Jun 3 22:07:23.883, one per second, identical apart from timestamps: each ran the same kubectl exec / nc command against 10.10.190.207 30573, logged rc: 1 with stderr "nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused / command terminated with exit code 1", and ended with "Retrying..." ...]
Jun 3 22:07:24.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:24.883: INFO: rc: 1 Jun 3 22:07:24.883: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:25.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:25.882: INFO: rc: 1 Jun 3 22:07:25.882: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:26.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:26.890: INFO: rc: 1 Jun 3 22:07:26.890: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:27.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:27.918: INFO: rc: 1 Jun 3 22:07:27.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:28.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:28.910: INFO: rc: 1 Jun 3 22:07:28.910: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:29.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:29.901: INFO: rc: 1 Jun 3 22:07:29.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:30.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:31.548: INFO: rc: 1 Jun 3 22:07:31.548: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:31.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:31.908: INFO: rc: 1 Jun 3 22:07:31.908: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:32.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:32.904: INFO: rc: 1 Jun 3 22:07:32.904: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:33.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:33.900: INFO: rc: 1 Jun 3 22:07:33.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:34.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:34.897: INFO: rc: 1 Jun 3 22:07:34.897: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:35.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:35.911: INFO: rc: 1 Jun 3 22:07:35.911: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:36.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:36.928: INFO: rc: 1 Jun 3 22:07:36.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:37.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:37.902: INFO: rc: 1 Jun 3 22:07:37.902: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:38.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:38.931: INFO: rc: 1 Jun 3 22:07:38.932: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:39.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:39.904: INFO: rc: 1 Jun 3 22:07:39.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:40.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:40.885: INFO: rc: 1 Jun 3 22:07:40.886: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:41.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:41.904: INFO: rc: 1 Jun 3 22:07:41.904: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:42.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:42.930: INFO: rc: 1 Jun 3 22:07:42.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:43.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:43.876: INFO: rc: 1 Jun 3 22:07:43.876: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:44.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:44.912: INFO: rc: 1 Jun 3 22:07:44.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:45.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:45.897: INFO: rc: 1 Jun 3 22:07:45.897: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:46.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:46.881: INFO: rc: 1 Jun 3 22:07:46.881: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:47.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:47.884: INFO: rc: 1 Jun 3 22:07:47.884: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:48.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:48.898: INFO: rc: 1 Jun 3 22:07:48.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:49.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:49.895: INFO: rc: 1 Jun 3 22:07:49.895: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:50.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:50.884: INFO: rc: 1 Jun 3 22:07:50.884: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:51.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:51.905: INFO: rc: 1 Jun 3 22:07:51.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:52.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:52.885: INFO: rc: 1 Jun 3 22:07:52.885: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:53.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:53.897: INFO: rc: 1 Jun 3 22:07:53.897: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:54.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:54.888: INFO: rc: 1 Jun 3 22:07:54.888: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:55.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:55.894: INFO: rc: 1 Jun 3 22:07:55.894: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:07:56.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:56.903: INFO: rc: 1 Jun 3 22:07:56.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:57.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:57.888: INFO: rc: 1 Jun 3 22:07:57.888: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:58.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:58.915: INFO: rc: 1 Jun 3 22:07:58.916: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:07:59.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:07:59.915: INFO: rc: 1 Jun 3 22:07:59.915: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:00.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:00.898: INFO: rc: 1 Jun 3 22:08:00.899: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:01.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:01.899: INFO: rc: 1 Jun 3 22:08:01.899: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:02.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:02.905: INFO: rc: 1 Jun 3 22:08:02.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:03.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:03.901: INFO: rc: 1 Jun 3 22:08:03.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:04.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:04.885: INFO: rc: 1 Jun 3 22:08:04.885: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:05.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:05.908: INFO: rc: 1 Jun 3 22:08:05.908: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:06.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:06.935: INFO: rc: 1 Jun 3 22:08:06.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:07.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:07.901: INFO: rc: 1 Jun 3 22:08:07.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:08.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:08.903: INFO: rc: 1 Jun 3 22:08:08.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:09.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:09.891: INFO: rc: 1 Jun 3 22:08:09.892: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:10.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:10.897: INFO: rc: 1 Jun 3 22:08:10.897: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:11.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:11.893: INFO: rc: 1 Jun 3 22:08:11.893: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:12.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:12.868: INFO: rc: 1 Jun 3 22:08:12.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:13.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:13.895: INFO: rc: 1 Jun 3 22:08:13.895: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:14.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:14.907: INFO: rc: 1 Jun 3 22:08:14.907: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:15.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:15.890: INFO: rc: 1 Jun 3 22:08:15.890: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:16.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:16.925: INFO: rc: 1 Jun 3 22:08:16.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:17.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:17.892: INFO: rc: 1 Jun 3 22:08:17.892: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:18.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:18.913: INFO: rc: 1 Jun 3 22:08:18.913: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:19.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:19.886: INFO: rc: 1 Jun 3 22:08:19.887: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:20.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:20.903: INFO: rc: 1 Jun 3 22:08:20.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:21.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:21.888: INFO: rc: 1 Jun 3 22:08:21.888: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:22.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:22.904: INFO: rc: 1 Jun 3 22:08:22.904: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:23.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:23.906: INFO: rc: 1 Jun 3 22:08:23.907: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:24.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:24.891: INFO: rc: 1 Jun 3 22:08:24.891: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:25.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:25.903: INFO: rc: 1 Jun 3 22:08:25.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:26.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:26.917: INFO: rc: 1 Jun 3 22:08:26.917: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:27.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:27.892: INFO: rc: 1 Jun 3 22:08:27.892: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 3 22:08:28.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:28.898: INFO: rc: 1 Jun 3 22:08:28.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:29.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:29.909: INFO: rc: 1 Jun 3 22:08:29.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:30.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:31.332: INFO: rc: 1 Jun 3 22:08:31.332: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 3 22:08:31.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573' Jun 3 22:08:31.577: INFO: rc: 1 Jun 3 22:08:31.578: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2233 exec execpod-affinitygk6sd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30573: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30573 nc: connect to 10.10.190.207 port 30573 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
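The probe that keeps failing above is nothing more than a TCP connect attempt (echo hostName | nc -v -t -w 2 10.10.190.207 30573) executed inside the exec pod about once per second until a 2m0s budget runs out. A minimal standalone sketch of that poll in plain Go follows; the endpoint, the ~1s retry interval, the 2s dial timeout, and the 2m budget are taken from the log, while the function name and everything else is illustrative, not the framework's actual helper:

package main

import (
	"fmt"
	"net"
	"time"
)

// pollNodePort mimics the probe seen in the log above: try a TCP connect
// with a 2s dial timeout roughly once per second until it succeeds or the
// overall 2m budget is exhausted (the same budget the FAIL message reports).
func pollNodePort(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // service reachable
		}
		fmt.Printf("connect to %s failed: %v; Retrying...\n", addr, err)
		time.Sleep(interval)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := pollNodePort("10.10.190.207:30573", time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Against this cluster every dial returned connection refused, so a loop like this exhausts its budget, which is exactly the failure recorded below.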
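For context on what was being exercised: the stack trace below points at execAffinityTestForSessionAffinityTimeout, which drives a NodePort Service whose sessionAffinity is ClientIP with a configurable affinity timeout. A hedged sketch of that kind of Service object using the upstream corev1 types (only the Service name and NodePort type come from this log; the 10-second timeout, selector, and port are illustrative placeholders, not values read from the test):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityTimeoutService sketches the shape of Service this test exercises:
// NodePort, sessionAffinity ClientIP, and a short affinity timeout so the
// test can observe the affinity expiring.
func affinityTimeoutService() *corev1.Service {
	timeout := int32(10) // illustrative; the real test chooses its own value
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"name": "affinity-nodeport-timeout"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
}

func main() { _ = affinityTimeoutService() }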
Jun 3 22:08:31.578: FAIL: Unexpected error:
    <*errors.errorString | 0xc0042e2760>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30573 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30573 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc00104a580, 0x77b33d8, 0xc001ba0000, 0xc0015f1900)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0026ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0026ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0026ab200, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 3 22:08:31.579: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2233, will wait for the garbage collector to delete the pods
Jun 3 22:08:31.655: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.809861ms
Jun 3 22:08:31.756: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.054751ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2233".
STEP: Found 36 events.
Jun 3 22:08:40.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-2xw5d: { } Scheduled: Successfully assigned services-2233/affinity-nodeport-timeout-2xw5d to node2
Jun 3 22:08:40.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-7w8xs: { } Scheduled: Successfully assigned services-2233/affinity-nodeport-timeout-7w8xs to node2
Jun 3 22:08:40.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-wnsmp: { } Scheduled: Successfully assigned services-2233/affinity-nodeport-timeout-wnsmp to node2
Jun 3 22:08:40.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitygk6sd: { } Scheduled: Successfully assigned services-2233/execpod-affinitygk6sd to node2
Jun 3 22:08:40.275: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-2233/kube-proxy-mode-detector to node2
Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:15 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:15 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 262.213136ms
Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:15 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:16 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:18 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller }
SuccessfulCreate: Created pod: affinity-nodeport-timeout-2xw5d Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:18 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-wnsmp Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:18 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-7w8xs Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:18 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:19 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 359.256733ms Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Failed: Error: cannot find volume "kube-api-access-fr8bj" to mount into container "agnhost-container" Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:21 +0000 UTC - event for affinity-nodeport-timeout-7w8xs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:21 +0000 UTC - event for affinity-nodeport-timeout-7w8xs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 267.257983ms Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:21 +0000 UTC - event for affinity-nodeport-timeout-wnsmp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-2xw5d: {kubelet node2} Created: Created container affinity-nodeport-timeout Jun 3 22:08:40.275: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-2xw5d: {kubelet node2} Started: Started container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-2xw5d: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-2xw5d: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 313.070502ms Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-7w8xs: {kubelet node2} Started: Started container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-7w8xs: {kubelet node2} Created: Created container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-wnsmp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 564.864486ms Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-wnsmp: {kubelet node2} Started: Started container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:22 +0000 UTC - event for affinity-nodeport-timeout-wnsmp: {kubelet node2} Created: Created container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:26 +0000 UTC - event for execpod-affinitygk6sd: {kubelet node2} Created: 
Created container agnhost-container Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:26 +0000 UTC - event for execpod-affinitygk6sd: {kubelet node2} Started: Started container agnhost-container Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:26 +0000 UTC - event for execpod-affinitygk6sd: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 256.427481ms Jun 3 22:08:40.276: INFO: At 2022-06-03 22:06:26 +0000 UTC - event for execpod-affinitygk6sd: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 3 22:08:40.276: INFO: At 2022-06-03 22:08:31 +0000 UTC - event for affinity-nodeport-timeout-2xw5d: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:08:31 +0000 UTC - event for affinity-nodeport-timeout-7w8xs: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:08:31 +0000 UTC - event for affinity-nodeport-timeout-wnsmp: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout Jun 3 22:08:40.276: INFO: At 2022-06-03 22:08:31 +0000 UTC - event for execpod-affinitygk6sd: {kubelet node2} Killing: Stopping container agnhost-container Jun 3 22:08:40.277: INFO: POD NODE PHASE GRACE CONDITIONS Jun 3 22:08:40.278: INFO: Jun 3 22:08:40.282: INFO: Logging node info for node master1 Jun 3 22:08:40.285: INFO: Node Info: &Node{ObjectMeta:{master1 4d289319-b343-4e96-a789-1a1cbeac007b 50679 0 2022-06-03 19:57:53 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:08:40.285: INFO: Logging kubelet events for node master1 Jun 3 22:08:40.288: INFO: Logging pods the kubelet 
thinks is on node master1 Jun 3 22:08:40.310: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:08:40.310: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container autoscaler ready: true, restart count 2 Jun 3 22:08:40.310: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container coredns ready: true, restart count 1 Jun 3 22:08:40.310: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.310: INFO: Container docker-registry ready: true, restart count 0 Jun 3 22:08:40.310: INFO: Container nginx ready: true, restart count 0 Jun 3 22:08:40.310: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container kube-scheduler ready: true, restart count 0 Jun 3 22:08:40.310: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:08:40.310: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container kube-controller-manager ready: true, restart count 1 Jun 3 22:08:40.310: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:08:40.310: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:08:40.310: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.310: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:08:40.310: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.310: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:08:40.310: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:08:40.408: INFO: Latency metrics for node master1 Jun 3 22:08:40.408: INFO: Logging node info for node master2 Jun 3 22:08:40.411: INFO: Node Info: &Node{ObjectMeta:{master2 a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 50675 0 2022-06-03 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:08:40.411: INFO: Logging kubelet events for node master2 Jun 3 22:08:40.413: INFO: Logging pods the kubelet thinks is on node master2 Jun 3 22:08:40.419: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 22:08:40.419: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:08:40.419: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:08:40.419: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:08:40.419: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:08:40.419: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:08:40.419: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 22:08:40.419: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:08:40.419: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:08:40.419: INFO: Container prometheus-operator ready: true, restart count 0 Jun 3 22:08:40.419: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.419: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:08:40.419: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:08:40.501: INFO: Latency metrics for node master2 Jun 3 22:08:40.501: INFO: Logging node info for node master3 Jun 3 22:08:40.504: INFO: Node Info: &Node{ObjectMeta:{master3 559b19e7-45b0-4589-9993-9bba259aae96 50680 0 2022-06-03 19:58:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:08:40.505: INFO: Logging kubelet events for node master3 Jun 3 22:08:40.508: INFO: Logging pods the kubelet thinks is on node master3 Jun 3 22:08:40.517: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container kube-apiserver ready: true, restart count 0 Jun 3 22:08:40.517: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:08:40.517: INFO: Container kube-flannel ready: true, restart count 2 Jun 3 22:08:40.517: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:08:40.517: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container nfd-controller ready: true, restart count 0 Jun 3 22:08:40.517: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 3 22:08:40.517: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container kube-scheduler ready: true, restart count 3 Jun 3 22:08:40.517: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:08:40.517: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.517: INFO: Container coredns ready: true, restart count 1 Jun 3 22:08:40.517: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.517: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:08:40.517: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:08:40.607: INFO: Latency metrics for node master3 Jun 3 22:08:40.607: INFO: Logging node info for node node1 Jun 3 22:08:40.610: INFO: Node Info: &Node{ObjectMeta:{node1 482ecf0f-7f88-436d-a313-227096fe8b8d 50678 0 2022-06-03 19:59:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true 
feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:11:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:08:40.611: INFO: Logging kubelet events for node node1 Jun 3 22:08:40.614: INFO: Logging pods the kubelet thinks is on node node1 Jun 3 22:08:40.827: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.827: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:08:40.827: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.827: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:08:40.827: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 
22:08:40.827: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:08:40.827: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.827: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 22:08:40.827: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:08:40.827: INFO: Container collectd ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:08:40.827: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.827: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:08:40.827: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.827: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:08:40.827: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded) Jun 3 22:08:40.827: INFO: Container config-reloader ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container grafana ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container prometheus ready: true, restart count 1 Jun 3 22:08:40.827: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:40.827: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:08:40.827: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:08:40.827: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:08:40.827: INFO: Init container install-cni ready: true, restart count 2 Jun 3 22:08:40.828: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 22:08:40.828: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded) Jun 3 22:08:40.828: INFO: Container discover ready: false, restart count 0 Jun 3 22:08:40.828: INFO: Container init ready: false, restart count 0 Jun 3 22:08:40.828: INFO: Container install ready: false, restart count 0 Jun 3 22:08:40.828: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:40.828: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:08:42.056: INFO: Latency metrics for node node1 Jun 3 22:08:42.056: INFO: Logging node info for node node2 Jun 3 22:08:42.060: INFO: Node Info: &Node{ObjectMeta:{node2 bb95e261-57f4-4e78-b1f6-cbf8d9287d74 50681 0 2022-06-03 19:59:32 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true 
feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 20:12:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 22:08:34 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 3 22:08:42.061: INFO: Logging kubelet events for node node2 Jun 3 22:08:42.064: INFO: Logging pods the kubelet thinks is on node node2 Jun 3 22:08:42.076: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:08:42.076: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded) Jun 3 
22:08:42.076: INFO: Container discover ready: false, restart count 0 Jun 3 22:08:42.076: INFO: Container init ready: false, restart count 0 Jun 3 22:08:42.076: INFO: Container install ready: false, restart count 0 Jun 3 22:08:42.076: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container tas-extender ready: true, restart count 0 Jun 3 22:08:42.076: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Init container install-cni ready: true, restart count 0 Jun 3 22:08:42.076: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:08:42.076: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:08:42.076: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:08:42.076: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:08:42.076: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded) Jun 3 22:08:42.076: INFO: Container collectd ready: true, restart count 0 Jun 3 22:08:42.076: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:08:42.076: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:08:42.076: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 22:08:42.076: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:42.076: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:08:42.076: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:08:42.076: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded) Jun 3 22:08:42.076: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:08:42.076: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:08:42.076: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 22:08:42.076: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded) Jun 3 22:08:42.076: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:08:42.199: INFO: Latency metrics for node node2 Jun 3 22:08:42.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2233" for this suite. 
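Editor's note: the per-node diagnostics above (node conditions, allocatable resources, and the pods the kubelet reports on each node) are what the e2e framework dumps automatically after a spec fails. Below is a minimal client-go sketch that gathers the same information; the kubeconfig path matches this run, but the plain clientset wiring and the printed fields are illustrative assumptions, not the framework's actual dump helpers.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig this suite used (/root/.kube/config above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Allocatable mirrors the ResourceList dumped in the node info above.
		fmt.Printf("node %s: allocatable cpu=%s memory=%s\n", n.Name,
			n.Status.Allocatable.Cpu().String(), n.Status.Allocatable.Memory().String())
		// Conditions mirror the NodeCondition entries (Ready, MemoryPressure, ...).
		for _, c := range n.Status.Conditions {
			fmt.Printf("  condition %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
		// Pods bound to this node, matching the per-node kubelet pod listing.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(),
			metav1.ListOptions{FieldSelector: "spec.nodeName=" + n.Name})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("  pod %s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
}

The same information is available interactively with `kubectl describe node node2` and `kubectl get pods -A --field-selector spec.nodeName=node2`.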
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [147.910 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jun 3 22:08:31.578: Unexpected error:
      <*errors.errorString | 0xc0042e2760>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30573 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30573 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":321,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
Jun 3 22:08:42.217: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":34,"skipped":804,"failed":0}
Jun 3 22:06:18.045: INFO: Running AfterSuite actions on all nodes
Jun 3 22:08:42.295: INFO: Running AfterSuite actions on node 1
Jun 3 22:08:42.295: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

Ran 320 of 5773 Specs in 791.375 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 13m13.031334127s
Test Suite Failed
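Editor's note: the failed spec exercises ClientIP session affinity with an explicit timeout on a NodePort Service. Below is a hedged sketch of the Service shape such a test drives; the object name, selector label, ports, and 10-second timeout are illustrative assumptions, not the test's literal fixture.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// NodePort Service with ClientIP session affinity and a deliberately
	// short affinity timeout so that expiry is observable within a test.
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"}, // name assumed for illustration
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeNodePort,
			Selector:        map[string]string{"app": "affinity-backend"}, // assumed pod label
			SessionAffinity: v1.ServiceAffinityClientIP,
			SessionAffinityConfig: &v1.SessionAffinityConfig{
				ClientIP: &v1.ClientIPConfig{
					TimeoutSeconds: int32Ptr(10), // affinity entry expires after 10s of inactivity
				},
			},
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080), // assumed backend port
				Protocol:   v1.ProtocolTCP,
			}},
		},
	}
	fmt.Printf("%s: affinity=%s timeout=%ds\n", svc.Name,
		svc.Spec.SessionAffinity, *svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}

kube-proxy pins each client source IP to one backend for as long as the affinity entry is live; sessionAffinityConfig.clientIP.timeoutSeconds bounds how long an idle client stays pinned, which is the behavior the spec verifies.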
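Editor's note: the recorded error, "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30573 over TCP protocol", comes from a reachability poll that never saw the NodePort answer. Below is a minimal sketch of that kind of poll, assuming a simple fixed retry interval; the framework helper behind service.go:2497 is more elaborate.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr repeatedly until a connection succeeds or the
// overall deadline passes; the error text mirrors the suite's message.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the NodePort answered; the service is reachable
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// The failing spec gave up on this node IP and NodePort after 2m0s.
	if err := waitForTCP("10.10.190.207:30573", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The final tally is internally consistent: 314 passed + 6 failed = 320 specs run, and 5773 − 320 = 5453 skipped.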