Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1655503147 - Will randomize all specs
Will run 5773 specs
Running in parallel across 10 nodes

Jun 17 21:59:08.914: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:08.919: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 17 21:59:08.942: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 17 21:59:09.009: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting
Jun 17 21:59:09.009: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting
Jun 17 21:59:09.009: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 17 21:59:09.009: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 17 21:59:09.009: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 17 21:59:09.020: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 17 21:59:09.020: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 17 21:59:09.020: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 17 21:59:09.020: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 17 21:59:09.020: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 17 21:59:09.020: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 17 21:59:09.020: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 17 21:59:09.021: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 17 21:59:09.021: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 17 21:59:09.021: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 17 21:59:09.021: INFO: e2e test version: v1.21.9
Jun 17 21:59:09.021: INFO: kube-apiserver version: v1.21.1
Jun 17 21:59:09.022: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.028: INFO: Cluster IP family: ipv4
Jun 17 21:59:09.024: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.047: INFO: Cluster IP family: ipv4
Jun 17 21:59:09.034: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.056: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Jun 17 21:59:09.041: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.064: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 17 21:59:09.043: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.066: INFO: Cluster IP family: ipv4
Jun 17 21:59:09.053: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.066: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 17 21:59:09.047: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.067: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 17 21:59:09.062: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.083: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 17 21:59:09.062: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.084: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
Jun 17 21:59:09.071: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:09.094: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0617 21:59:09.103892 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.104: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.106: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting the proxy server
Jun 17 21:59:09.110: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1289 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:09.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1289" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
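The -p 0 flag passed above asks kubectl proxy to bind an ephemeral port and report the address it chose, which the test then curls. A minimal Go sketch of the same idea, outside the suite; it assumes kubectl is on PATH and that the first stdout line is kubectl's usual "Starting to serve on ..." message:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	// --port=0 asks kubectl to bind any free port; the chosen
	// address is reported on the first line of stdout.
	cmd := exec.Command("kubectl", "proxy", "--port=0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	if sc.Scan() {
		// e.g. "Starting to serve on 127.0.0.1:39023" (format assumed)
		fmt.Println(sc.Text())
	}
	_ = cmd.Process.Kill()
}
```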
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
W0617 21:59:09.065766 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.066: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.071: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Jun 17 21:59:11.098: INFO: running pods: 0 < 1
Jun 17 21:59:13.104: INFO: running pods: 0 < 1
Jun 17 21:59:15.104: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:17.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5467" for this suite.
• [SLOW TEST:8.103 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSS
------------------------------
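The update/patch steps above target only the PodDisruptionBudget's status subresource. A minimal client-go sketch of the same call, assuming a reachable cluster; the PDB name, namespace, and patched field are placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Merge-patch only the "status" subresource; "my-pdb" is a placeholder.
	patch := []byte(`{"status":{"observedGeneration":1}}`)
	pdb, err := cs.PolicyV1().PodDisruptionBudgets("default").Patch(
		context.TODO(), "my-pdb", types.MergePatchType, patch,
		metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Println(pdb.Status.ObservedGeneration)
}
```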
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W0617 21:59:09.128109 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.128: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.130: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 21:59:09.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d" in namespace "downward-api-4982" to be "Succeeded or Failed"
Jun 17 21:59:09.147: INFO: Pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135698ms
Jun 17 21:59:11.154: INFO: Pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008908839s
Jun 17 21:59:13.157: INFO: Pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011725924s
Jun 17 21:59:15.160: INFO: Pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015069091s
Jun 17 21:59:17.165: INFO: Pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01961368s
STEP: Saw pod success
Jun 17 21:59:17.165: INFO: Pod "downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d" satisfied condition "Succeeded or Failed"
Jun 17 21:59:17.167: INFO: Trying to get logs from node node2 pod downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d container client-container:
STEP: delete the pod
Jun 17 21:59:17.186: INFO: Waiting for pod downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d to disappear
Jun 17 21:59:17.188: INFO: Pod downwardapi-volume-5e0f16cd-eb53-42eb-991d-f74bd137e26d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:17.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4982" for this suite.
• [SLOW TEST:8.103 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
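DefaultMode sets the permission bits applied to every file projected by the volume, which is what this test asserts. A sketch of a downward-API volume in that shape; the 0400 mode, file path, and field are illustrative, not taken from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // example mode under test; 0644 is the usual default
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &mode,
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```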
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0617 21:59:09.115041 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.115: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.117: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name cm-test-opt-del-c1b06060-2121-47b2-97fd-4b76e9376cc8
STEP: Creating configMap with name cm-test-opt-upd-3fb8ff6e-d93e-455d-b0f1-c40b3cd685b0
STEP: Creating the pod
Jun 17 21:59:09.142: INFO: The status of Pod pod-projected-configmaps-a1ea41cd-7c29-49a9-b0a5-8da473ea351f is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:11.147: INFO: The status of Pod pod-projected-configmaps-a1ea41cd-7c29-49a9-b0a5-8da473ea351f is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:13.149: INFO: The status of Pod pod-projected-configmaps-a1ea41cd-7c29-49a9-b0a5-8da473ea351f is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:15.149: INFO: The status of Pod pod-projected-configmaps-a1ea41cd-7c29-49a9-b0a5-8da473ea351f is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:17.153: INFO: The status of Pod pod-projected-configmaps-a1ea41cd-7c29-49a9-b0a5-8da473ea351f is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-c1b06060-2121-47b2-97fd-4b76e9376cc8
STEP: Updating configmap cm-test-opt-upd-3fb8ff6e-d93e-455d-b0f1-c40b3cd685b0
STEP: Creating configMap with name cm-test-opt-create-b766ea98-e9b5-495b-a45a-d08c45abd509
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:19.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5085" for this suite.
• [SLOW TEST:10.128 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
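Marking the projected ConfigMap sources optional is what lets the pod keep running while the referenced ConfigMaps are deleted, updated, and created around it, as the test does above. A sketch of such a volume; the name echoes the log's cm-test-opt-del prefix but is a placeholder:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true // a missing ConfigMap no longer blocks pod startup
	vol := corev1.Volume{
		Name: "projected-cm",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```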
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:19.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Starting the proxy
Jun 17 21:59:19.313: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1861 proxy --unix-socket=/tmp/kubectl-proxy-unix167785204/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:19.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1861" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
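Talking to a proxy bound to a unix socket, as this test does, needs an HTTP client whose dialer ignores the URL's host. A stdlib-only sketch; the socket path is a placeholder standing in for the log's /tmp/kubectl-proxy-unix.../test:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy.sock" // placeholder socket path
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request over the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}
	resp, err := client.Get("http://unix/api/") // host part is ignored by the dialer
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```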
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:17.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jun 17 21:59:23.277: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Jun 17 21:59:23.345: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:23.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8170" for this suite.
• [SLOW TEST:6.154 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
SSSSSSSS
------------------------------
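The deleteOptions behavior this test asserts corresponds to foreground cascading deletion: the owner stays visible (with a deletion timestamp) until the garbage collector has removed all of its dependents. A minimal client-go sketch with placeholder names:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation keeps the RC around until the GC has
	// deleted all of its pods; "my-rc" is a placeholder.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc", metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
	fmt.Println("foreground deletion requested")
}
```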
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W0617 21:59:09.136044 40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.136: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.138: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 17 21:59:09.160: INFO: Waiting up to 5m0s for pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1" in namespace "emptydir-9667" to be "Succeeded or Failed"
Jun 17 21:59:09.164: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149963ms
Jun 17 21:59:11.168: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007494795s
Jun 17 21:59:13.171: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010349798s
Jun 17 21:59:15.175: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014371872s
Jun 17 21:59:17.178: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017804592s
Jun 17 21:59:19.181: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020989025s
Jun 17 21:59:21.185: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025182253s
Jun 17 21:59:23.191: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.03059165s
Jun 17 21:59:25.196: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.035307774s
STEP: Saw pod success
Jun 17 21:59:25.196: INFO: Pod "pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1" satisfied condition "Succeeded or Failed"
Jun 17 21:59:25.199: INFO: Trying to get logs from node node2 pod pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1 container test-container:
STEP: delete the pod
Jun 17 21:59:25.212: INFO: Waiting for pod pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1 to disappear
Jun 17 21:59:25.214: INFO: Pod pod-8bd4fe17-fb05-4ec4-a323-f590349bf9a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:25.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9667" for this suite.
• [SLOW TEST:16.116 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:25.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:25.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8906" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
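The patch-then-list flow above can be reproduced with client-go. A sketch with placeholder names; the label key and value are illustrative, not taken from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Add a label via strategic-merge patch, then find the secret by it.
	patch := []byte(`{"metadata":{"labels":{"testsecret":"true"}}}`)
	if _, err := cs.CoreV1().Secrets("default").Patch(
		context.TODO(), "my-secret", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Empty namespace lists across all namespaces, as the test does.
	list, err := cs.CoreV1().Secrets("").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "testsecret=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(list.Items))
}
```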
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:25.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Jun 17 21:59:25.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7648 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1'
Jun 17 21:59:25.628: INFO: stderr: ""
Jun 17 21:59:25.628: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518
Jun 17 21:59:25.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7648 delete pods e2e-test-httpd-pod'
Jun 17 21:59:25.820: INFO: stderr: ""
Jun 17 21:59:25.820: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:25.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7648" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":3,"skipped":78,"failed":0}
SSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:17.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 21:59:17.175: INFO: Creating pod...
Jun 17 21:59:17.188: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:18.192: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:19.191: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:20.191: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:21.192: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:22.193: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:23.191: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:24.191: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:25.194: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:26.191: INFO: Pod Quantity: 1 Status: Pending
Jun 17 21:59:27.194: INFO: Pod Status: Running
Jun 17 21:59:27.195: INFO: Creating service...
Jun 17 21:59:27.202: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/DELETE
Jun 17 21:59:27.206: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Jun 17 21:59:27.206: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/GET
Jun 17 21:59:27.208: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Jun 17 21:59:27.208: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/HEAD
Jun 17 21:59:27.210: INFO: http.Client request:HEAD | StatusCode:200
Jun 17 21:59:27.211: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/OPTIONS
Jun 17 21:59:27.213: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Jun 17 21:59:27.213: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/PATCH
Jun 17 21:59:27.215: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Jun 17 21:59:27.215: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/POST
Jun 17 21:59:27.217: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Jun 17 21:59:27.217: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/pods/agnhost/proxy/some/path/with/PUT
Jun 17 21:59:27.220: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
Jun 17 21:59:27.220: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/DELETE
Jun 17 21:59:27.224: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Jun 17 21:59:27.224: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/GET
Jun 17 21:59:27.227: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Jun 17 21:59:27.227: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/HEAD
Jun 17 21:59:27.230: INFO: http.Client request:HEAD | StatusCode:200
Jun 17 21:59:27.230: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/OPTIONS
Jun 17 21:59:27.234: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Jun 17 21:59:27.234: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/PATCH
Jun 17 21:59:27.237: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Jun 17 21:59:27.237: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/POST
Jun 17 21:59:27.240: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Jun 17 21:59:27.240: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-950/services/test-service/proxy/some/path/with/PUT
Jun 17 21:59:27.244: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:27.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-950" for this suite.
• [SLOW TEST:10.101 seconds]
[sig-network] Proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSSSSSSSSSSS
------------------------------
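The proxied verbs above all go through the apiserver's pods/proxy and services/proxy subresources. A sketch of one such GET via client-go's REST client; the pod name and path mirror the log, the rest is assumed boilerplate:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET .../pods/agnhost/proxy/some/path/with/GET through the apiserver.
	raw, err := cs.CoreV1().RESTClient().Get().
		Namespace("proxy-950").
		Resource("pods").
		Name("agnhost").
		SubResource("proxy").
		Suffix("some/path/with/GET").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // the test expects the body "foo"
}
```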
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W0617 21:59:09.238067 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.238: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.239: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 17 21:59:28.321: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:28.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2120" for this suite.
• [SLOW TEST:19.124 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":59,"failed":0}
SSSSSSSSS
------------------------------
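The assertion above relies on terminationMessagePolicy FallbackToLogsOnError: when a container fails without writing to its terminationMessagePath, the message is taken from the tail of its log instead, which is why the test sees "DONE". A sketch of the relevant container field; image and command are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox", // placeholder image
		Command: []string{"/bin/sh", "-c", "echo -n DONE; exit 1"},
		// On failure with an empty termination-message file, the kubelet
		// falls back to the log tail for status...terminated.message.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}
```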
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:19.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-bebcaef5-6494-4c86-8df3-2b16e187fd4e
STEP: Creating a pod to test consume secrets
Jun 17 21:59:19.547: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a" in namespace "projected-4406" to be "Succeeded or Failed"
Jun 17 21:59:19.549: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341605ms
Jun 17 21:59:21.553: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005933188s
Jun 17 21:59:23.557: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010087551s
Jun 17 21:59:25.561: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014302363s
Jun 17 21:59:27.565: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018449029s
Jun 17 21:59:29.569: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022056232s
STEP: Saw pod success
Jun 17 21:59:29.569: INFO: Pod "pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a" satisfied condition "Succeeded or Failed"
Jun 17 21:59:29.572: INFO: Trying to get logs from node node1 pod pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a container projected-secret-volume-test:
STEP: delete the pod
Jun 17 21:59:29.582: INFO: Waiting for pod pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a to disappear
Jun 17 21:59:29.584: INFO: Pod pod-projected-secrets-71a20a71-9492-4ce8-a815-26643cef905a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:29.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4406" for this suite.
• [SLOW TEST:10.093 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W0617 21:59:09.072943 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.073: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.074: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-50b5fddc-e07e-43d0-8891-812ef6c8759e
STEP: Creating secret with name s-test-opt-upd-2427d53c-da05-44c6-a2cb-09e715481575
STEP: Creating the pod
Jun 17 21:59:09.114: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:11.118: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:13.119: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:15.121: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:17.123: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:19.117: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:21.119: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:23.119: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:25.119: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:27.121: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:29.117: INFO: The status of Pod pod-secrets-1bfef263-c06b-4b72-8371-92533561f1ce is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-50b5fddc-e07e-43d0-8891-812ef6c8759e
STEP: Updating secret s-test-opt-upd-2427d53c-da05-44c6-a2cb-09e715481575
STEP: Creating secret with name s-test-opt-create-b09ba4b4-5d1a-4ef7-8cfe-43a0fbe410b9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:35.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9108" for this suite.
• [SLOW TEST:26.180 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:25.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Jun 17 21:59:25.876: INFO: The status of Pod labelsupdate1c744e02-2c9c-42db-a1e7-4ccac903bc24 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:27.879: INFO: The status of Pod labelsupdate1c744e02-2c9c-42db-a1e7-4ccac903bc24 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:29.880: INFO: The status of Pod labelsupdate1c744e02-2c9c-42db-a1e7-4ccac903bc24 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:31.880: INFO: The status of Pod labelsupdate1c744e02-2c9c-42db-a1e7-4ccac903bc24 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:33.881: INFO: The status of Pod labelsupdate1c744e02-2c9c-42db-a1e7-4ccac903bc24 is Running (Ready = true)
Jun 17 21:59:34.401: INFO: Successfully updated pod "labelsupdate1c744e02-2c9c-42db-a1e7-4ccac903bc24"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:36.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5338" for this suite.
• [SLOW TEST:10.596 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":83,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
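The labels file in this test comes from a downward-API fieldRef; because pod labels are mutable, the kubelet rewrites the projected file after the in-place update, which is what the test waits for. A sketch of the volume definition, with placeholder names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Projects the pod's own labels into a file; when the labels change,
	// the kubelet refreshes the file on a later sync without a restart.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```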
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
W0617 21:59:09.196600 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.196: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.198: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Jun 17 21:59:09.212: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:11.216: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:13.218: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:15.217: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:17.218: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:19.215: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:21.217: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Jun 17 21:59:21.237: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:23.242: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:25.240: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:27.240: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:29.241: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 17 21:59:29.307: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 17 21:59:29.309: INFO: Pod pod-with-poststart-http-hook still exists
Jun 17 21:59:31.311: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 17 21:59:31.314: INFO: Pod pod-with-poststart-http-hook still exists
Jun 17 21:59:33.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 17 21:59:33.313: INFO: Pod pod-with-poststart-http-hook still exists
Jun 17 21:59:35.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 17 21:59:35.313: INFO: Pod pod-with-poststart-http-hook still exists
Jun 17 21:59:37.310: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 17 21:59:37.313: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:37.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1579" for this suite.
• [SLOW TEST:28.147 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":52,"failed":0}
SS
------------------------------
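A postStart httpGet hook like the one exercised here can be declared as below. This sketch uses the corev1.LifecycleHandler name from current client-go; at this suite's v1.21 vintage the Go type was still corev1.Handler. Host, port, and path are placeholders modeled on the handler pod:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "k8s.gcr.io/pause", // placeholder image
		Lifecycle: &corev1.Lifecycle{
			// The kubelet issues this GET right after the container starts;
			// the test's handler pod records the request.
			PostStart: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: "10.244.4.5", // placeholder handler-pod IP
					Path: "/echo?msg=poststart",
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	fmt.Printf("%+v\n", c.Lifecycle.PostStart.HTTPGet)
}
```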
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:09.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
W0617 21:59:09.124370 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 21:59:09.124: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 21:59:09.126: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-7298
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 21:59:09.128: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 21:59:09.169: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:11.173: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:13.172: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 21:59:15.173: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:17.173: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:19.173: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:21.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:23.173: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:25.173: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:27.174: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:29.173: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 21:59:31.174: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 21:59:31.179: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 21:59:33.184: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 21:59:39.218: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Jun 17 21:59:39.218: INFO: Going to poll 10.244.4.111 on port 8080 at least 0 times, with a maximum of 34 tries before failing
Jun 17 21:59:39.220: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.111:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7298 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 21:59:39.220: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:39.375: INFO: Found all 1 expected endpoints: [netserver-0]
Jun 17 21:59:39.375: INFO: Going to poll 10.244.3.248 on port 8080 at least 0 times, with a maximum of 34 tries before failing
Jun 17 21:59:39.378: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.248:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7298 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 21:59:39.378: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 21:59:39.473: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:39.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7298" for this suite.
• [SLOW TEST:30.391 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:27.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 21:59:27.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00" in namespace "downward-api-932" to be "Succeeded or Failed"
Jun 17 21:59:27.319: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 3.603145ms
Jun 17 21:59:29.321: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006181453s
Jun 17 21:59:31.325: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010044889s
Jun 17 21:59:33.329: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013845778s
Jun 17 21:59:35.333: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017767474s
Jun 17 21:59:37.336: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020924486s
Jun 17 21:59:39.340: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024704775s
Jun 17 21:59:41.344: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.029273464s
STEP: Saw pod success
Jun 17 21:59:41.344: INFO: Pod "downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00" satisfied condition "Succeeded or Failed"
Jun 17 21:59:41.347: INFO: Trying to get logs from node node2 pod downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00 container client-container:
STEP: delete the pod
Jun 17 21:59:41.361: INFO: Waiting for pod downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00 to disappear
Jun 17 21:59:41.363: INFO: Pod downwardapi-volume-507e43a1-b822-4dd4-852b-8f80c89ecc00 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 21:59:41.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-932" for this suite.
• [SLOW TEST:14.090 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
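The CPU limit reaches the container's filesystem through a downward-API resourceFieldRef. A sketch of the volume item; the container name must match a container in the same pod and is a placeholder here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					// Exposes the named container's CPU limit as a file.
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // placeholder
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```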
• [SLOW TEST:14.090 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:23.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 21:59:23.399: INFO: Creating simple deployment test-new-deployment Jun 17 21:59:23.408: INFO: deployment "test-new-deployment" doesn't have the required revision set Jun 17 21:59:25.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:27.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:29.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:31.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:33.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:35.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:37.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 21:59:39.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791099963, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 21:59:41.440: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-8711 79690a92-bdf4-4b1e-a087-40a4804103b0 32480 3 2022-06-17 21:59:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-06-17 21:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 21:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001cb07f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-17 21:59:41 +0000 UTC,LastTransitionTime:2022-06-17 21:59:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-06-17 21:59:41 +0000 UTC,LastTransitionTime:2022-06-17 21:59:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 17 21:59:41.445: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-8711 56004ce1-764d-49f1-8d80-604e4e0c95c4 32483 3 2022-06-17 21:59:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 79690a92-bdf4-4b1e-a087-40a4804103b0 0xc001cb0be7 0xc001cb0be8}] [] [{kube-controller-manager Update apps/v1 2022-06-17 21:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79690a92-bdf4-4b1e-a087-40a4804103b0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001cb0c58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 17 21:59:41.448: INFO: Pod "test-new-deployment-847dcfb7fb-cq9xn" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-cq9xn test-new-deployment-847dcfb7fb- deployment-8711 2b4bd4a8-302a-4fe0-9939-df7b3d7dcf9f 32462 0 2022-06-17 21:59:23 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.13" ], "mac": "3a:0e:7b:ec:0d:81", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.13" ], "mac": "3a:0e:7b:ec:0d:81", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 56004ce1-764d-49f1-8d80-604e4e0c95c4 0xc001cb0fef 0xc001cb1000}] [] [{kube-controller-manager Update v1 2022-06-17 21:59:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56004ce1-764d-49f1-8d80-604e4e0c95c4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 21:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 21:59:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ktv9x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktv9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 21:59:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 21:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 21:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 21:59:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.13,StartTime:2022-06-17 21:59:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 21:59:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e206c025bbb64e4c98f995f6c60c784c9ba3e725e4e975070900598461e5d914,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 21:59:41.448: INFO: Pod "test-new-deployment-847dcfb7fb-w86v8" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-w86v8 test-new-deployment-847dcfb7fb- deployment-8711 8459c23f-6582-4a80-be52-699402909a9c 32484 0 2022-06-17 21:59:41 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 56004ce1-764d-49f1-8d80-604e4e0c95c4 0xc001cb11ef 0xc001cb1200}] [] [{kube-controller-manager Update v1 2022-06-17 21:59:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56004ce1-764d-49f1-8d80-604e4e0c95c4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fsh9w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fsh9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 21:59:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:41.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8711" for this suite. • [SLOW TEST:18.083 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:35.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 17 21:59:35.372: INFO: Waiting up to 5m0s for pod "downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b" in namespace "downward-api-7928" to be "Succeeded or Failed" Jun 17 21:59:35.375: INFO: Pod "downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868484ms Jun 17 21:59:37.380: INFO: Pod "downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007861928s Jun 17 21:59:39.385: INFO: Pod "downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013163039s Jun 17 21:59:41.389: INFO: Pod "downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01700094s STEP: Saw pod success Jun 17 21:59:41.389: INFO: Pod "downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b" satisfied condition "Succeeded or Failed" Jun 17 21:59:41.391: INFO: Trying to get logs from node node1 pod downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b container dapi-container: STEP: delete the pod Jun 17 21:59:41.701: INFO: Waiting for pod downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b to disappear Jun 17 21:59:41.702: INFO: Pod downward-api-b5bb10aa-c688-4c36-a530-38794781ee6b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:41.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7928" for this suite. • [SLOW TEST:6.369 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:09.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-7467 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 17 21:59:09.282: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 17 21:59:09.315: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:11.318: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:13.319: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:15.319: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:17.318: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:19.318: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:21.319: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:23.319: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:25.318: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:27.319: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:29.318: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 21:59:31.319: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 17 21:59:31.323: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 17 21:59:33.328: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating 
test pods Jun 17 21:59:45.352: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 17 21:59:45.352: INFO: Breadth first check of 10.244.4.112 on host 10.10.190.207... Jun 17 21:59:45.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.18:9080/dial?request=hostname&protocol=udp&host=10.244.4.112&port=8081&tries=1'] Namespace:pod-network-test-7467 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 21:59:45.355: INFO: >>> kubeConfig: /root/.kube/config Jun 17 21:59:46.029: INFO: Waiting for responses: map[] Jun 17 21:59:46.029: INFO: reached 10.244.4.112 after 0/1 tries Jun 17 21:59:46.029: INFO: Breadth first check of 10.244.3.250 on host 10.10.190.208... Jun 17 21:59:46.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.18:9080/dial?request=hostname&protocol=udp&host=10.244.3.250&port=8081&tries=1'] Namespace:pod-network-test-7467 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 21:59:46.031: INFO: >>> kubeConfig: /root/.kube/config Jun 17 21:59:46.221: INFO: Waiting for responses: map[] Jun 17 21:59:46.221: INFO: reached 10.244.3.250 after 0/1 tries Jun 17 21:59:46.221: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:46.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7467" for this suite. • [SLOW TEST:36.971 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:41.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 17 21:59:41.811: INFO: Waiting up to 5m0s for pod "downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756" in namespace "downward-api-6933" to be "Succeeded or Failed" Jun 17 21:59:41.814: INFO: Pod "downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.52171ms Jun 17 21:59:43.817: INFO: Pod "downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005199898s Jun 17 21:59:45.821: INFO: Pod "downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009884406s Jun 17 21:59:47.825: INFO: Pod "downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013409389s STEP: Saw pod success Jun 17 21:59:47.825: INFO: Pod "downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756" satisfied condition "Succeeded or Failed" Jun 17 21:59:47.827: INFO: Trying to get logs from node node1 pod downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756 container dapi-container: STEP: delete the pod Jun 17 21:59:48.183: INFO: Waiting for pod downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756 to disappear Jun 17 21:59:48.186: INFO: Pod downward-api-7f41a84e-b795-4f7d-bba0-963d132c2756 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:48.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6933" for this suite. • [SLOW TEST:6.415 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":102,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:41.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-f232abf1-8379-4785-8c1e-eaab036e9a63 STEP: Creating a pod to test consume secrets Jun 17 21:59:41.447: INFO: Waiting up to 5m0s for pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e" in namespace "secrets-597" to be "Succeeded or Failed" Jun 17 21:59:41.449: INFO: Pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.69171ms Jun 17 21:59:43.453: INFO: Pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006009339s Jun 17 21:59:45.457: INFO: Pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009934607s Jun 17 21:59:47.459: INFO: Pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012660671s Jun 17 21:59:49.463: INFO: Pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.016072403s STEP: Saw pod success Jun 17 21:59:49.463: INFO: Pod "pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e" satisfied condition "Succeeded or Failed" Jun 17 21:59:49.466: INFO: Trying to get logs from node node2 pod pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e container secret-volume-test: STEP: delete the pod Jun 17 21:59:49.529: INFO: Waiting for pod pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e to disappear Jun 17 21:59:49.535: INFO: Pod pod-secrets-360f2313-75c7-4fad-b616-07db490b1d8e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:49.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-597" for this suite. • [SLOW TEST:8.132 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:46.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-cd23ad63-a54c-4933-8095-3f60f8af5995 STEP: Creating a pod to test consume secrets Jun 17 21:59:46.285: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91" in namespace "projected-4815" to be "Succeeded or Failed" Jun 17 21:59:46.290: INFO: Pod "pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.863372ms Jun 17 21:59:48.294: INFO: Pod "pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008405654s Jun 17 21:59:50.301: INFO: Pod "pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015553123s STEP: Saw pod success Jun 17 21:59:50.301: INFO: Pod "pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91" satisfied condition "Succeeded or Failed" Jun 17 21:59:50.304: INFO: Trying to get logs from node node1 pod pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91 container secret-volume-test: STEP: delete the pod Jun 17 21:59:50.318: INFO: Waiting for pod pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91 to disappear Jun 17 21:59:50.320: INFO: Pod pod-projected-secrets-135d1306-cd4c-43fb-933b-193bbc438a91 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:50.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4815" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:37.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 21:59:47.372: INFO: Deleting pod "var-expansion-aabde495-0b63-4a3f-a8fc-b6cc9642405b" in namespace "var-expansion-7256" Jun 17 21:59:47.377: INFO: Wait up to 5m0s for pod "var-expansion-aabde495-0b63-4a3f-a8fc-b6cc9642405b" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:53.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7256" for this suite. 
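------------------------------
The Variable Expansion test above never shows a running pod because none is expected: a volumeMount whose subPathExpr expands to an absolute path is refused by the kubelet, so the pod is created, fails to start, and is then deleted, which is exactly the "Deleting pod" / "Wait up to 5m0s ... to be fully deleted" sequence logged. A minimal sketch of such a deliberately invalid mount follows; the image, env-var value, object names, and "default" namespace are illustrative assumptions rather than the suite's exact fixture.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "var-expansion-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.34", // illustrative image
				Command: []string{"sh", "-c", "sleep 3600"},
				Env:     []corev1.EnvVar{{Name: "POD_NAME", Value: "/absolute"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// Expands to "/absolute". The API server admits the pod, but the
					// kubelet rejects absolute subPath expansions at mount time, so
					// the container never starts -- the failure this test asserts.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Because the rejection happens on the node rather than at admission, the test must create the pod, observe that it stays unstarted, and clean it up itself, which accounts for most of its 16-second runtime.
------------------------------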
• [SLOW TEST:16.064 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":2,"skipped":54,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:41.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-730077ca-9a80-43fe-bede-830fc57ab099 STEP: Creating a pod to test consume secrets Jun 17 21:59:41.600: INFO: Waiting up to 5m0s for pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b" in namespace "secrets-1991" to be "Succeeded or Failed" Jun 17 21:59:41.603: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896971ms Jun 17 21:59:43.607: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007027083s Jun 17 21:59:45.611: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01135973s Jun 17 21:59:47.614: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013970177s Jun 17 21:59:49.616: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016335824s Jun 17 21:59:51.620: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020578238s Jun 17 21:59:53.623: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.023665626s STEP: Saw pod success Jun 17 21:59:53.623: INFO: Pod "pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b" satisfied condition "Succeeded or Failed" Jun 17 21:59:53.625: INFO: Trying to get logs from node node2 pod pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b container secret-volume-test: STEP: delete the pod Jun 17 21:59:53.640: INFO: Waiting for pod pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b to disappear Jun 17 21:59:53.642: INFO: Pod pod-secrets-74d37854-e41a-4c2e-8dcc-467c8a9cee8b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:53.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1991" for this suite. 
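------------------------------
The Secrets test above exercises two volume features at once: an item mapping (the secret key is exposed under a different file name than the key itself) and an explicit per-item mode (the projected file's permission bits). A hedged client-go sketch of the same shape follows, with illustrative secret data, image, object names, and namespace.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := client.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	mode := int32(0400) // item mode under test; shows up as -r-------- inside the pod
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.34", // illustrative image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// The mapping: key "data-1" is projected under a new file name.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

As in the other volume tests, the pod's own container output is the assertion: the test waits for Succeeded, then reads the container log to check both the mapped file name and its mode.
------------------------------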
• [SLOW TEST:12.084 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":77,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:50.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-22948d67-9720-473a-afe1-eb952293f74a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:54.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9838" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:48.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 17 21:59:48.241: INFO: Waiting up to 5m0s for pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4" in namespace "emptydir-4337" to be "Succeeded or Failed" Jun 17 21:59:48.244: INFO: Pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246116ms Jun 17 21:59:50.249: INFO: Pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008022084s Jun 17 21:59:52.254: INFO: Pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012365939s Jun 17 21:59:54.258: INFO: Pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016500652s Jun 17 21:59:56.262: INFO: Pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.02077879s STEP: Saw pod success Jun 17 21:59:56.262: INFO: Pod "pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4" satisfied condition "Succeeded or Failed" Jun 17 21:59:56.264: INFO: Trying to get logs from node node2 pod pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4 container test-container: STEP: delete the pod Jun 17 21:59:56.277: INFO: Waiting for pod pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4 to disappear Jun 17 21:59:56.278: INFO: Pod pod-4de2eecf-084e-445e-bbdd-3fd564a13ed4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 21:59:56.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4337" for this suite. • [SLOW TEST:8.082 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":105,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:54.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 17 21:59:54.522: INFO: The status of Pod labelsupdate99d1131f-ed2f-4198-80ae-4e5d4f4fb065 is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:56.524: INFO: The status of Pod labelsupdate99d1131f-ed2f-4198-80ae-4e5d4f4fb065 is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:58.525: INFO: The status of Pod labelsupdate99d1131f-ed2f-4198-80ae-4e5d4f4fb065 is Running (Ready = true) Jun 17 21:59:59.052: INFO: Successfully updated pod "labelsupdate99d1131f-ed2f-4198-80ae-4e5d4f4fb065" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:01.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7121" for this suite. 
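------------------------------
The Projected downwardAPI test above relies on the kubelet refreshing downward API files after object metadata changes: the pod projects its own metadata.labels into a file, the test patches the pod's labels, and the "Successfully updated pod" line marks the point after which the file's contents are expected to change. A minimal sketch under the same assumptions as the earlier examples (illustrative image, names, and "default" namespace); the refresh is eventually consistent, which is why the logged test keeps the pod running and re-reads the file rather than checking once.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.34", // illustrative image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Later: modify the label. The kubelet rewrites /etc/podinfo/labels in place,
	// and the container's periodic cat picks up the new value.
	patch := []byte(`{"metadata":{"labels":{"key":"value2"}}}`)
	if _, err := client.CoreV1().Pods("default").Patch(ctx, "labelsupdate-demo", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}

Only labels, annotations, and a few other metadata fields are live-updated this way; resource limits such as the cpu_limit projection earlier are fixed for the pod's lifetime.
------------------------------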
• [SLOW TEST:6.594 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should update labels on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:53.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 21:59:53.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jun 17 22:00:01.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7127 --namespace=crd-publish-openapi-7127 create -f -'
Jun 17 22:00:02.017: INFO: stderr: ""
Jun 17 22:00:02.017: INFO: stdout: "e2e-test-crd-publish-openapi-7431-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jun 17 22:00:02.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7127 --namespace=crd-publish-openapi-7127 delete e2e-test-crd-publish-openapi-7431-crds test-cr'
Jun 17 22:00:02.184: INFO: stderr: ""
Jun 17 22:00:02.184: INFO: stdout: "e2e-test-crd-publish-openapi-7431-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jun 17 22:00:02.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7127 --namespace=crd-publish-openapi-7127 apply -f -'
Jun 17 22:00:02.550: INFO: stderr: ""
Jun 17 22:00:02.550: INFO: stdout: "e2e-test-crd-publish-openapi-7431-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jun 17 22:00:02.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7127 --namespace=crd-publish-openapi-7127 delete e2e-test-crd-publish-openapi-7431-crds test-cr'
Jun 17 22:00:02.716: INFO: stderr: ""
Jun 17 22:00:02.716: INFO: stdout: "e2e-test-crd-publish-openapi-7431-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jun 17 22:00:02.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7127 explain e2e-test-crd-publish-openapi-7431-crds'
Jun 17 22:00:03.073: INFO: stderr: ""
Jun 17 22:00:03.073: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7431-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:06.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7127" for this suite.

• [SLOW TEST:13.357 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":3,"skipped":55,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:06.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 17 22:00:09.809: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:09.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4394" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:53.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3985.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3985.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3985.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3985.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3985.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3985.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 17 22:00:13.726: INFO: DNS probes using dns-3985/dns-test-b4dbc831-016b-4e85-8765-f8a377e5073f succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:13.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3985" for this suite.

• [SLOW TEST:20.090 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:13.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149
[It] should support creating IngressClass API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 17 22:00:13.819: INFO: starting watch
STEP: patching
STEP: updating
Jun 17 22:00:13.826: INFO: waiting for watch events with expected annotations
Jun 17 22:00:13.826: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:13.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-4597" for this suite.

•
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:09.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4d720792-aad3-48e9-866c-c7fc190a0003
STEP: Creating a pod to test consume secrets
Jun 17 22:00:09.954: INFO: Waiting up to 5m0s for pod "pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df" in namespace "secrets-2340" to be "Succeeded or Failed"
Jun 17 22:00:09.956: INFO: Pod "pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128171ms
Jun 17 22:00:11.959: INFO: Pod "pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005112858s
Jun 17 22:00:13.963: INFO: Pod "pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008246339s
STEP: Saw pod success
Jun 17 22:00:13.963: INFO: Pod "pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df" satisfied condition "Succeeded or Failed"
Jun 17 22:00:13.965: INFO: Trying to get logs from node node1 pod pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df container secret-volume-test:
STEP: delete the pod
Jun 17 22:00:13.977: INFO: Waiting for pod pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df to disappear
Jun 17 22:00:13.979: INFO: Pod pod-secrets-ddfc684c-6ee4-4e16-96f0-6ce9d159e2df no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:13.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2340" for this suite.
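Note on the Secrets test above: it creates a secret, mounts it through a secret volume with a non-default defaultMode, and checks the mounted file's permissions via the test container's logs. A minimal by-hand sketch of the same pattern; the names demo-secret and secret-mode-demo are illustrative, not taken from the test:

$ kubectl create secret generic demo-secret --from-literal=data-1=value-1
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        defaultMode: 0400        # octal in YAML; the JSON wire form is the decimal 256
  containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
EOF
$ kubectl logs secret-mode-demo    # once the pod has Succeeded, ls -ln shows the 0400 mode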
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":104,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:14.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 17 22:00:14.056: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 17 22:00:14.062: INFO: starting watch
STEP: patching
STEP: updating
Jun 17 22:00:14.072: INFO: waiting for watch events with expected annotations
Jun 17 22:00:14.072: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:14.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-533" for this suite.
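Note on the EndpointSlice test above: it drives the full verb set (create, get, list, watch, patch, update, delete, deletecollection) against discovery.k8s.io/v1. The same object can be exercised by hand; this manifest is an illustrative sketch, not the one the test uses:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example   # associates the slice with a Service named "example"
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.4.5.6"
EOF
$ kubectl get endpointslices --watch &             # observe the ADDED/MODIFIED events
$ kubectl annotate endpointslice example-abc patched=true
$ kubectl delete endpointslice example-abc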
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":6,"skipped":119,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:14.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should complete a service status lifecycle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Service
STEP: watching for the Service to be added
Jun 17 22:00:14.142: INFO: Found Service test-service-76h8l in namespace services-1264 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}]
Jun 17 22:00:14.142: INFO: Service test-service-76h8l created
STEP: Getting /status
Jun 17 22:00:14.145: INFO: Service test-service-76h8l has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Jun 17 22:00:14.150: INFO: observed Service test-service-76h8l in namespace services-1264 with annotations: map[] & LoadBalancer: {[]}
Jun 17 22:00:14.150: INFO: Found Service test-service-76h8l in namespace services-1264 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Jun 17 22:00:14.150: INFO: Service test-service-76h8l has service status patched
STEP: updating the ServiceStatus
Jun 17 22:00:14.155: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Jun 17 22:00:14.156: INFO: Observed Service test-service-76h8l in namespace services-1264 with annotations: map[] & Conditions: {[]}
Jun 17 22:00:14.156: INFO: Observed event: &Service{ObjectMeta:{test-service-76h8l services-1264 14ef0d81-21d2-4495-a01e-bf3ac630f372 33329 0 2022-06-17 22:00:14 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-06-17 22:00:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.19.191,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.19.191],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Jun 17 22:00:14.156: INFO: Found Service test-service-76h8l in namespace services-1264 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jun 17 22:00:14.156: INFO: Service test-service-76h8l has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Jun 17 22:00:14.166: INFO: observed Service test-service-76h8l in namespace services-1264 with labels: map[test-service-static:true]
Jun 17 22:00:14.166: INFO: observed Service test-service-76h8l in namespace services-1264 with labels: map[test-service-static:true]
Jun 17 22:00:14.166: INFO: observed Service test-service-76h8l in namespace services-1264 with labels: map[test-service-static:true]
Jun 17 22:00:14.166: INFO: Found Service test-service-76h8l in namespace services-1264 with labels: map[test-service:patched test-service-static:true]
Jun 17 22:00:14.166: INFO: Service test-service-76h8l patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Jun 17 22:00:14.174: INFO: Observed event: ADDED
Jun 17 22:00:14.174: INFO: Observed event: MODIFIED
Jun 17 22:00:14.174: INFO: Observed event: MODIFIED
Jun 17 22:00:14.174: INFO: Observed event: MODIFIED
Jun 17 22:00:14.174: INFO: Found Service test-service-76h8l in namespace services-1264 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Jun 17 22:00:14.174: INFO: Service test-service-76h8l deleted
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:14.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1264" for this suite.
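Note on the Services lifecycle test above: the 203.0.113.1 ingress IP patched into status.loadBalancer is from the TEST-NET-3 documentation range, so no real load balancer is involved; the test writes to the status subresource directly through the client library. Roughly the same steps from the command line (kubectl only gained --subresource=status in v1.24, newer than the v1.21 client used in this run, so treat this as a sketch):

$ kubectl create service clusterip test-service --tcp=80:80
$ kubectl get service test-service --watch &       # observe ADDED/MODIFIED events
$ kubectl patch service test-service -p '{"metadata":{"labels":{"test-service":"patched"}}}'
$ kubectl patch service test-service --subresource=status \
    -p '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}'
$ kubectl delete service test-service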
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":7,"skipped":129,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:01.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:00:01.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jun 17 22:00:09.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9831 --namespace=crd-publish-openapi-9831 create -f -'
Jun 17 22:00:10.182: INFO: stderr: ""
Jun 17 22:00:10.182: INFO: stdout: "e2e-test-crd-publish-openapi-5525-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jun 17 22:00:10.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9831 --namespace=crd-publish-openapi-9831 delete e2e-test-crd-publish-openapi-5525-crds test-cr'
Jun 17 22:00:10.346: INFO: stderr: ""
Jun 17 22:00:10.346: INFO: stdout: "e2e-test-crd-publish-openapi-5525-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jun 17 22:00:10.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9831 --namespace=crd-publish-openapi-9831 apply -f -'
Jun 17 22:00:10.707: INFO: stderr: ""
Jun 17 22:00:10.707: INFO: stdout: "e2e-test-crd-publish-openapi-5525-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jun 17 22:00:10.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9831 --namespace=crd-publish-openapi-9831 delete e2e-test-crd-publish-openapi-5525-crds test-cr'
Jun 17 22:00:10.896: INFO: stderr: ""
Jun 17 22:00:10.896: INFO: stdout: "e2e-test-crd-publish-openapi-5525-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jun 17 22:00:10.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9831 explain e2e-test-crd-publish-openapi-5525-crds'
Jun 17 22:00:11.266: INFO: stderr: ""
Jun 17 22:00:11.266: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5525-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:14.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9831" for this suite.
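Note on the CRD test above: "preserving unknown fields at the schema root" means the published openAPIV3Schema is a bare object with x-kubernetes-preserve-unknown-fields: true, so the API server prunes nothing and kubectl explain has no properties to show. A hand-rolled equivalent (widgets.example.com is an illustrative name, not the test's generated one):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # keep arbitrary fields instead of pruning
EOF
$ kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: test-cr
anything: goes            # an unknown property is accepted, as in the test
EOF
$ kubectl explain widgets   # only KIND/VERSION and an empty DESCRIPTION, as in the log above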
• [SLOW TEST:13.862 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:49.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-llss
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 21:59:49.618: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-llss" in namespace "subpath-356" to be "Succeeded or Failed"
Jun 17 21:59:49.620: INFO: Pod "pod-subpath-test-projected-llss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535973ms
Jun 17 21:59:51.625: INFO: Pod "pod-subpath-test-projected-llss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006932848s
Jun 17 21:59:53.628: INFO: Pod "pod-subpath-test-projected-llss": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010678742s
Jun 17 21:59:55.636: INFO: Pod "pod-subpath-test-projected-llss": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01826494s
Jun 17 21:59:57.640: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 8.022012999s
Jun 17 21:59:59.642: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 10.024851413s
Jun 17 22:00:01.648: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 12.029993059s
Jun 17 22:00:03.651: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 14.033207328s
Jun 17 22:00:05.655: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 16.037338377s
Jun 17 22:00:07.660: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 18.042137878s
Jun 17 22:00:09.663: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 20.045702112s
Jun 17 22:00:11.667: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 22.049262501s
Jun 17 22:00:13.672: INFO: Pod "pod-subpath-test-projected-llss": Phase="Running", Reason="", readiness=true. Elapsed: 24.054050707s
Jun 17 22:00:15.676: INFO: Pod "pod-subpath-test-projected-llss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.058371679s
STEP: Saw pod success
Jun 17 22:00:15.676: INFO: Pod "pod-subpath-test-projected-llss" satisfied condition "Succeeded or Failed"
Jun 17 22:00:15.678: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-llss container test-container-subpath-projected-llss:
STEP: delete the pod
Jun 17 22:00:15.899: INFO: Waiting for pod pod-subpath-test-projected-llss to disappear
Jun 17 22:00:15.901: INFO: Pod pod-subpath-test-projected-llss no longer exists
STEP: Deleting pod pod-subpath-test-projected-llss
Jun 17 22:00:15.901: INFO: Deleting pod "pod-subpath-test-projected-llss" in namespace "subpath-356"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-356" for this suite.

• [SLOW TEST:26.342 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:14.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:00:20.230: INFO: Deleting pod "var-expansion-26ddff82-e11a-4ac6-a858-897a24f6bf29" in namespace "var-expansion-323"
Jun 17 22:00:20.234: INFO: Wait up to 5m0s for pod "var-expansion-26ddff82-e11a-4ac6-a858-897a24f6bf29" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:24.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-323" for this suite.
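Note on the Variable Expansion test above: the kubelet only substitutes $(VAR) references (in command, args, and subPathExpr); backtick command substitution is shell syntax and is never evaluated, which is the invalid form whose rejection the test verifies. The valid pattern looks roughly like this; all names here are illustrative:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "ls /logs && echo ok"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: workdir
          mountPath: /logs
          subPathExpr: $(POD_NAME)   # expanded by the kubelet; `backticks` would not be
  volumes:
    - name: workdir
      emptyDir: {}
EOF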
• [SLOW TEST:10.061 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":8,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":50,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:15.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308
STEP: creating the pod
Jun 17 22:00:15.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 create -f -'
Jun 17 22:00:16.359: INFO: stderr: ""
Jun 17 22:00:16.359: INFO: stdout: "pod/pause created\n"
Jun 17 22:00:16.359: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 17 22:00:16.359: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2946" to be "running and ready"
Jun 17 22:00:16.362: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239316ms
Jun 17 22:00:18.366: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006538585s
Jun 17 22:00:20.371: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011463192s
Jun 17 22:00:22.375: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01555125s
Jun 17 22:00:24.378: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.018274995s
Jun 17 22:00:24.378: INFO: Pod "pause" satisfied condition "running and ready"
Jun 17 22:00:24.378: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: adding the label testing-label with value testing-label-value to a pod
Jun 17 22:00:24.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 label pods pause testing-label=testing-label-value'
Jun 17 22:00:24.554: INFO: stderr: ""
Jun 17 22:00:24.554: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jun 17 22:00:24.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 get pod pause -L testing-label'
Jun 17 22:00:24.739: INFO: stderr: ""
Jun 17 22:00:24.739: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jun 17 22:00:24.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 label pods pause testing-label-'
Jun 17 22:00:24.929: INFO: stderr: ""
Jun 17 22:00:24.929: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jun 17 22:00:24.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 get pod pause -L testing-label'
Jun 17 22:00:25.111: INFO: stderr: ""
Jun 17 22:00:25.112: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n"
[AfterEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314
STEP: using delete to clean up resources
Jun 17 22:00:25.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 delete --grace-period=0 --force -f -'
Jun 17 22:00:25.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 17 22:00:25.256: INFO: stdout: "pod \"pause\" force deleted\n"
Jun 17 22:00:25.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 get rc,svc -l name=pause --no-headers'
Jun 17 22:00:25.471: INFO: stderr: "No resources found in kubectl-2946 namespace.\n"
Jun 17 22:00:25.471: INFO: stdout: ""
Jun 17 22:00:25.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2946 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 17 22:00:25.637: INFO: stderr: ""
Jun 17 22:00:25.637: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:25.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2946" for this suite.
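Note on the Kubectl label test above: the trailing dash in "kubectl label pods pause testing-label-" is the removal syntax, and -L adds the label as an output column. The full add/show/remove cycle, plus the overwrite variant this test does not need:

$ kubectl label pods pause testing-label=testing-label-value
$ kubectl get pod pause -L testing-label             # value appears in a TESTING-LABEL column
$ kubectl label pods pause testing-label-            # trailing '-' removes the label
$ kubectl label pods pause testing-label=other --overwrite   # required to change an existing value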
• [SLOW TEST:9.728 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306
should update the label on a resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:25.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 17 22:00:25.708: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 17 22:00:25.714: INFO: starting watch
STEP: patching
STEP: updating
Jun 17 22:00:25.724: INFO: waiting for watch events with expected annotations
Jun 17 22:00:25.724: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:25.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-2258" for this suite.
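Note on the Ingress API test above: like the IngressClass and EndpointSlice tests, it is a pure API-machinery exercise against networking.k8s.io/v1 (including the /status subresource), with no ingress controller behind it. A minimal v1 Ingress for trying the same verbs by hand; all names are illustrative:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
EOF
$ kubectl get ingress example-ingress -o yaml   # spec plus the (empty) status stanza
$ kubectl delete ingress example-ingress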
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:24.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jun 17 22:00:34.394: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Jun 17 22:00:34.459: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jun 17 22:00:34.459: INFO: Deleting pod "simpletest-rc-to-be-deleted-4t7js" in namespace "gc-1392"
Jun 17 22:00:34.469: INFO: Deleting pod "simpletest-rc-to-be-deleted-52skg" in namespace "gc-1392"
Jun 17 22:00:34.474: INFO: Deleting pod "simpletest-rc-to-be-deleted-7sfzl" in namespace "gc-1392"
Jun 17 22:00:34.481: INFO: Deleting pod "simpletest-rc-to-be-deleted-c5rhm" in namespace "gc-1392"
Jun 17 22:00:34.488: INFO: Deleting pod "simpletest-rc-to-be-deleted-dpxlw" in namespace "gc-1392"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:34.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1392" for this suite.
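Note on the Garbage collector test above: dependents carry ownerReferences, and a pod owned both by the foreground-deleted simpletest-rc-to-be-deleted and by the still-live simpletest-rc-to-stay must survive the first owner's deletion. Inspecting and reproducing the ownership side by hand (an illustrative sketch):

$ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
$ kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground   # blocks until unblocked dependents are gone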
• [SLOW TEST:10.206 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":9,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:25.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jun 17 22:00:25.800: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:35.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6121" for this suite.
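Note on the InitContainer test above: with restartPolicy Never, init containers run sequentially to completion before the app container starts, and a failure leaves the pod Failed rather than retrying. A minimal sketch of the happy path; init-demo and the container names are illustrative:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
    - name: init-1
      image: busybox
      command: ["true"]
    - name: init-2
      image: busybox
      command: ["true"]
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo main ran"]
EOF
$ kubectl get pod init-demo --watch   # Init:0/2 -> Init:1/2 -> PodInitializing -> Completed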
• [SLOW TEST:10.118 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should invoke init containers on a RestartNever pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:00:14.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 17 22:00:15.397: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 17 22:00:17.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 22:00:19.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 22:00:21.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100015, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 17 22:00:24.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:00:36.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4860" for this suite.
STEP: Destroying namespace "webhook-4860-markers" for this suite.
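Note on the AdmissionWebhook test above: timeoutSeconds bounds how long the API server waits for the webhook; with failurePolicy Fail a timeout rejects the request, with Ignore it is waved through, and an empty timeout defaults to 10s in admissionregistration.k8s.io/v1, exactly the four cases exercised. A sketch of the registration object; the delay path, port, and names here are assumptions modeled on the test's webhook server, and caBundle is omitted:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo
webhooks:
  - name: slow.example.com
    timeoutSeconds: 1          # deliberately shorter than the webhook's 5s response delay
    failurePolicy: Ignore      # a timeout is tolerated instead of rejecting the request
    sideEffects: None
    admissionReviewVersions: ["v1"]
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    clientConfig:
      service:
        namespace: default
        name: e2e-test-webhook
        path: /always-allow-delay-5s
        port: 8443
EOF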
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.556 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":7,"skipped":63,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:36.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-5494
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating stateful set ss in namespace statefulset-5494
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5494
Jun 17 21:59:36.508: INFO: Found 0 stateful pods, waiting for 1
Jun 17 21:59:46.513: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jun 17 21:59:46.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 17 21:59:47.096: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jun 17 21:59:47.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 17 21:59:47.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jun 17 21:59:47.099: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 17 21:59:57.103: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 17 21:59:57.103: INFO: Waiting for statefulset status.replicas updated to 0
Jun 17 21:59:57.113: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 21:59:57.113: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }]
Jun 17 21:59:57.113: INFO: 
Jun 17 21:59:57.113: INFO: StatefulSet ss has not reached scale 3, at 1
Jun 17 21:59:58.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996387349s
Jun 17 21:59:59.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993282414s
Jun 17 22:00:00.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990105575s
Jun 17 22:00:01.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985538988s
Jun 17 22:00:02.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980880908s
Jun 17 22:00:03.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.977413017s
Jun 17 22:00:04.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970864033s
Jun 17 22:00:05.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966579794s
Jun 17 22:00:06.152: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.609178ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5494
Jun 17 22:00:07.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 17 22:00:07.412: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jun 17 22:00:07.412: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 17 22:00:07.412: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jun 17 22:00:07.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 17 22:00:07.594: INFO: rc: 1
Jun 17 22:00:07.594: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr:
error: unable to upgrade connection: container not found ("webserver")
error:
exit status 1
Jun 17 22:00:17.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 17 22:00:17.927: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jun 17 22:00:17.927: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 17 22:00:17.927: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jun 17 22:00:17.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun 17 22:00:18.679: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jun 17 22:00:18.679: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun 17 22:00:18.679: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jun 17 22:00:18.683: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 17 22:00:18.683: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 17 22:00:18.683: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jun 17 22:00:18.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 17 22:00:18.928: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jun 17 22:00:18.928: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 17 22:00:18.928: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jun 17 22:00:18.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 17 22:00:19.176: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jun 17 22:00:19.176: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 17 22:00:19.176: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jun 17 22:00:19.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5494 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jun 17 22:00:19.637: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jun 17 22:00:19.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jun 17 22:00:19.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jun 17 22:00:19.637: INFO: Waiting for statefulset status.replicas updated to 0
Jun 17 22:00:19.640: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jun 17 22:00:29.647: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 17 22:00:29.647: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 17 22:00:29.647: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 17 22:00:29.655: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 22:00:29.655: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }]
Jun 17 22:00:29.655: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:29.655: INFO: ss-2 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:29.655: INFO: 
Jun 17 22:00:29.655: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 17 22:00:30.661: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 22:00:30.661: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }]
Jun 17 22:00:30.661: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:30.661: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:30.661: INFO: 
Jun 17 22:00:30.661: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 17 22:00:31.664: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 22:00:31.664: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }]
Jun 17 22:00:31.664: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:31.664: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:31.664: INFO: 
Jun 17 22:00:31.664: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 17 22:00:32.668: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 22:00:32.668: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }]
Jun 17 22:00:32.668: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:32.668: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:32.668: INFO: 
Jun 17 22:00:32.668: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 17 22:00:33.672: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 22:00:33.672: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }]
Jun 17 22:00:33.672: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }]
Jun 17 22:00:33.672: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000
UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:33.672: INFO: Jun 17 22:00:33.672: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 17 22:00:34.677: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 22:00:34.677: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }] Jun 17 22:00:34.677: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:34.677: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:34.677: INFO: Jun 17 22:00:34.677: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 17 22:00:35.680: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 22:00:35.680: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }] Jun 17 22:00:35.680: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:35.680: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:35.681: INFO: Jun 17 22:00:35.681: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 17 22:00:36.685: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 22:00:36.685: INFO: ss-0 node2 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }] Jun 17 22:00:36.685: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:36.685: INFO: Jun 17 22:00:36.685: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 17 22:00:37.689: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 22:00:37.689: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:36 +0000 UTC }] Jun 17 22:00:37.689: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:57 +0000 UTC }] Jun 17 22:00:37.689: INFO: Jun 17 22:00:37.689: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 17 22:00:38.692: INFO: Verifying statefulset ss doesn't scale past 0 for another 963.614958ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5494 Jun 17 22:00:39.696: INFO: Scaling statefulset ss to 0 Jun 17 22:00:39.705: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 17 22:00:39.708: INFO: Deleting all statefulsets in ns statefulset-5494 Jun 17 22:00:39.710: INFO: Scaling statefulset ss to 0 Jun 17 22:00:39.717: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:00:39.719: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:39.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5494" for this suite. 
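------------------------------
The "Scaling statefulset ss to 0" steps above go through the StatefulSet scale subresource. A minimal client-go sketch of the same operation, reusing the kubeconfig path and the ss / statefulset-5494 names from this log; the suite drives this through its own framework helpers, so this is an illustration, not the test's code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	sts := cs.AppsV1().StatefulSets("statefulset-5494")

	// Read the scale subresource, set replicas to 0, and write it back.
	// The burst test runs the set with podManagementPolicy: Parallel, so the
	// controller deletes the remaining pods without waiting on ordinal order,
	// which is why ss-1 disappears before ss-0 and ss-2 in the dumps above.
	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 0
	if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scale 0 requested; poll status.replicas until it reaches 0")
}
------------------------------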
• [SLOW TEST:63.260 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":5,"skipped":102,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:35.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Jun 17 22:00:36.272: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:00:36.283: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:00:38.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:00:40.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100036, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:00:43.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:43.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-323" for this suite. STEP: Destroying namespace "webhook-323-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.437 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:39.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:00:39.777: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2f82da34-7963-4cc9-893a-4f15d5cd272b" in namespace "security-context-test-6580" to be "Succeeded or Failed" Jun 17 22:00:39.780: INFO: Pod "alpine-nnp-false-2f82da34-7963-4cc9-893a-4f15d5cd272b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.552561ms Jun 17 22:00:41.783: INFO: Pod "alpine-nnp-false-2f82da34-7963-4cc9-893a-4f15d5cd272b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005955374s Jun 17 22:00:43.787: INFO: Pod "alpine-nnp-false-2f82da34-7963-4cc9-893a-4f15d5cd272b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009679007s Jun 17 22:00:45.791: INFO: Pod "alpine-nnp-false-2f82da34-7963-4cc9-893a-4f15d5cd272b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013650835s Jun 17 22:00:45.791: INFO: Pod "alpine-nnp-false-2f82da34-7963-4cc9-893a-4f15d5cd272b" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:45.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6580" for this suite. • [SLOW TEST:6.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:36.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:47.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7651" for this suite. • [SLOW TEST:11.118 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:43.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:00:43.745: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:00:45.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100043, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100043, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100043, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100043, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:00:48.764: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:49.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1804" for this suite. STEP: Destroying namespace "webhook-1804-markers" for this suite. 
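------------------------------
The patch and update steps above toggle the webhook's rules so a configmap CREATE is first rejected, then admitted once CREATE is removed from the rule, then rejected again after the patch restores it. A sketch of the kind of ValidatingWebhookConfiguration being manipulated; the object name, webhook name, and /validate path are assumptions, and the suite additionally injects a CA bundle for its generated serving certificate:

package main

import (
	"context"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	fail := admv1.Fail
	none := admv1.SideEffectClassNone
	path := "/validate" // hypothetical handler path on the webhook service

	cfgObj := &admv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-configmaps-demo"}, // hypothetical name
		Webhooks: []admv1.ValidatingWebhook{{
			Name: "deny-configmaps-demo.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{Namespace: "webhook-1804", Name: "e2e-test-webhook", Path: &path},
				// CABundle: PEM CA that signed the webhook's serving certificate.
			},
			// The update/patch steps in the log rewrite this Operations list:
			// dropping CREATE stops the webhook from seeing the configmap,
			// adding it back restores the rejection.
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule:       admv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"}},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(context.Background(), cfgObj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------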
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.480 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:45.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-85b0f20e-4b79-40b7-b26f-ee490be0e6aa STEP: Creating a pod to test consume configMaps Jun 17 22:00:45.893: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619" in namespace "configmap-5262" to be "Succeeded or Failed" Jun 17 22:00:45.897: INFO: Pod "pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377457ms Jun 17 22:00:47.902: INFO: Pod "pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008507203s Jun 17 22:00:49.905: INFO: Pod "pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011703046s STEP: Saw pod success Jun 17 22:00:49.905: INFO: Pod "pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619" satisfied condition "Succeeded or Failed" Jun 17 22:00:49.908: INFO: Trying to get logs from node node2 pod pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619 container agnhost-container: STEP: delete the pod Jun 17 22:00:49.920: INFO: Waiting for pod pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619 to disappear Jun 17 22:00:49.922: INFO: Pod pod-configmaps-b8ccbca1-6e80-48cf-a415-b9614e8fd619 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:49.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5262" for this suite. 
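------------------------------
The ConfigMap volume spec above reduces to mounting a ConfigMap into a pod and reading a key back as a file. A condensed sketch with hypothetical names and a busybox reader in place of the suite's agnhost-container; the namespace is also an assumption, since the suite creates a fresh configmap-XXXX namespace per run:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // assumption

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox:1.35",
				Command:      []string{"cat", "/etc/config/data-1"}, // prints "value-1", then the pod Succeeds
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/config"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------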
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":131,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:49.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Jun 17 22:00:49.969: INFO: created test-podtemplate-1 Jun 17 22:00:49.973: INFO: created test-podtemplate-2 Jun 17 22:00:49.976: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jun 17 22:00:49.978: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jun 17 22:00:49.989: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9490" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":8,"skipped":137,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:47.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-ce9e8890-0252-4c46-bb01-6b6daf3a6391 STEP: Creating a pod to test consume secrets Jun 17 22:00:47.703: INFO: Waiting up to 5m0s for pod "pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2" in namespace "secrets-2297" to be "Succeeded or Failed" Jun 17 22:00:47.705: INFO: Pod "pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.939542ms Jun 17 22:00:49.710: INFO: Pod "pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007860897s Jun 17 22:00:51.714: INFO: Pod "pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010984069s STEP: Saw pod success Jun 17 22:00:51.714: INFO: Pod "pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2" satisfied condition "Succeeded or Failed" Jun 17 22:00:51.716: INFO: Trying to get logs from node node2 pod pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2 container secret-env-test: STEP: delete the pod Jun 17 22:00:51.728: INFO: Waiting for pod pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2 to disappear Jun 17 22:00:51.730: INFO: Pod pod-secrets-3ce5ad94-a010-4c76-ae23-cd61c15a06f2 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:51.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2297" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":67,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:50.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-1dbc4ce9-1730-4467-9f82-140145ff6d2d STEP: Creating secret with name secret-projected-all-test-volume-7cf2e23b-a66b-41cd-868a-384fa4be13af STEP: Creating a pod to test Check all projections for projected volume plugin Jun 17 22:00:50.059: INFO: Waiting up to 5m0s for pod "projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf" in namespace "projected-7436" to be "Succeeded or Failed" Jun 17 22:00:50.062: INFO: Pod "projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809687ms Jun 17 22:00:52.065: INFO: Pod "projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005895167s Jun 17 22:00:54.068: INFO: Pod "projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008764068s STEP: Saw pod success Jun 17 22:00:54.068: INFO: Pod "projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf" satisfied condition "Succeeded or Failed" Jun 17 22:00:54.071: INFO: Trying to get logs from node node2 pod projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf container projected-all-volume-test: STEP: delete the pod Jun 17 22:00:54.082: INFO: Waiting for pod projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf to disappear Jun 17 22:00:54.084: INFO: Pod projected-volume-db3614ba-c8b2-41af-8411-382491f5fabf no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:54.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7436" for this suite. 
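------------------------------
The "Projected combined" spec above verifies that a single projected volume can merge a ConfigMap, a Secret, and downward-API fields under one mount point. A sketch of that volume shape with hypothetical object names; it only prints the structure, so it runs without a cluster:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "all-in-one",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"}, // hypothetical
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"}, // hypothetical
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	// All three sources land under the same mount point, which is what the
	// projected-all-volume-test container above reads back and verifies.
	out, err := json.MarshalIndent(vol, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------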
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":142,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:56.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 17 21:59:56.331: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 32979 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 21:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 21:59:56.332: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 32979 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 21:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 17 22:00:06.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 33144 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:00:06.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 33144 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 17 22:00:16.347: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 33398 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:00:16.347: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 33398 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 17 22:00:26.355: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 33678 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:00:26.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8545 d0d9860a-ec36-423a-bd22-5906e23ecfa8 33678 0 2022-06-17 21:59:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 17 22:00:36.361: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8545 d9385443-a5e6-4342-bcb7-f15611f9f5ae 33968 0 2022-06-17 22:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:00:36.361: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8545 d9385443-a5e6-4342-bcb7-f15611f9f5ae 33968 0 2022-06-17 22:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 17 22:00:46.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8545 d9385443-a5e6-4342-bcb7-f15611f9f5ae 34317 0 2022-06-17 22:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:00:46.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8545 d9385443-a5e6-4342-bcb7-f15611f9f5ae 34317 0 2022-06-17 22:00:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-06-17 22:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:56.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8545" for this suite. • [SLOW TEST:60.077 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":5,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:49.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-fa58fdb3-9a1d-4911-8885-854af7684169 STEP: Creating the pod Jun 17 22:00:49.891: INFO: The status of Pod pod-projected-configmaps-9496a0a0-76b1-4b60-b70f-715945882141 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:51.895: INFO: The status of Pod pod-projected-configmaps-9496a0a0-76b1-4b60-b70f-715945882141 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:53.895: INFO: The status of Pod pod-projected-configmaps-9496a0a0-76b1-4b60-b70f-715945882141 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-fa58fdb3-9a1d-4911-8885-854af7684169 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:57.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6671" for this suite. 
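------------------------------
The Watchers spec above (namespace watch-8545) opens label-filtered watches and asserts that ADDED, MODIFIED, and DELETED events arrive in order on the right watchers. An equivalent watch in client-go is a few lines; the namespace and label come from the log, the rest is a sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label-filtered watch, equivalent to "creating a watch on configmaps with label A".
	w, err := cs.CoreV1().ConfigMaps("watch-8545").Watch(context.Background(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event corresponds to one "Got : ADDED/MODIFIED/DELETED" line above.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}
------------------------------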
• [SLOW TEST:8.092 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":76,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:34.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-7000 STEP: creating service affinity-clusterip in namespace services-7000 STEP: creating replication controller affinity-clusterip in namespace services-7000 I0617 22:00:34.581153 26 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-7000, replica count: 3 I0617 22:00:37.633009 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:00:40.633850 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:00:43.634723 26 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:00:43.642: INFO: Creating new exec pod Jun 17 22:00:48.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7000 exec execpod-affinity2pgrz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jun 17 22:00:49.005: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jun 17 22:00:49.005: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:00:49.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7000 exec execpod-affinity2pgrz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.30.95 80' Jun 17 22:00:49.432: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.30.95 80\nConnection to 10.233.30.95 80 port [tcp/http] succeeded!\n" Jun 17 22:00:49.432: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:00:49.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7000 exec execpod-affinity2pgrz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.30.95:80/ ; done' Jun 17 
22:00:49.784: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.30.95:80/\n" Jun 17 22:00:49.784: INFO: stdout: "\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb\naffinity-clusterip-s4pzb" Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Received response from host: affinity-clusterip-s4pzb Jun 17 22:00:49.784: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-7000, will wait for the garbage collector to delete the pods Jun 17 22:00:49.849: INFO: Deleting ReplicationController affinity-clusterip took: 3.342797ms Jun 17 22:00:49.950: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.319232ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:00:59.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7000" for this suite. 
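------------------------------
Session affinity in the Services spec above is a single field on the Service: with SessionAffinity: ClientIP, kube-proxy pins each client address to one backend, which is why all sixteen curls return the same affinity-clusterip-s4pzb hostname. A sketch of the Service object; the selector label and the 9376 target port are assumptions about the suite's replication controller, while the name, namespace, and port 80 come from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
			// The field under test: requests from one client IP keep hitting
			// the same backend pod instead of being load-balanced.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	if _, err := cs.CoreV1().Services("services-7000").Create(context.Background(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------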
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:24.819 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:51.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-4301 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4301 to expose endpoints map[] Jun 17 22:00:51.793: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Jun 17 22:00:52.800: INFO: successfully validated that service multi-endpoint-test in namespace services-4301 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-4301 Jun 17 22:00:52.820: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:54.823: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:56.824: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4301 to expose endpoints map[pod1:[100]] Jun 17 22:00:56.835: INFO: successfully validated that service multi-endpoint-test in namespace services-4301 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-4301 Jun 17 22:00:56.856: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:58.862: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:00.861: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4301 to expose endpoints map[pod1:[100] pod2:[101]] Jun 17 22:01:00.876: INFO: successfully validated that service multi-endpoint-test in namespace services-4301 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-4301 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4301 to expose endpoints map[pod2:[101]] Jun 17 22:01:00.890: INFO: successfully validated that service multi-endpoint-test in namespace services-4301 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-4301 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4301 to expose endpoints map[] Jun 17 22:01:00.903: INFO: successfully validated that service multi-endpoint-test in namespace services-4301 exposes endpoints map[] [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:00.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4301" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.155 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":10,"skipped":79,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:54.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-62193db7-657e-435e-9f16-0070b7d5834d STEP: Creating secret with name s-test-opt-upd-a49ccfb5-c3c9-4911-b979-971860e0945d STEP: Creating the pod Jun 17 22:00:54.145: INFO: The status of Pod pod-projected-secrets-b8566876-485a-41d7-a01f-b9d8179c75bf is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:56.150: INFO: The status of Pod pod-projected-secrets-b8566876-485a-41d7-a01f-b9d8179c75bf is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:00:58.150: INFO: The status of Pod pod-projected-secrets-b8566876-485a-41d7-a01f-b9d8179c75bf is Running (Ready = true) STEP: Deleting secret s-test-opt-del-62193db7-657e-435e-9f16-0070b7d5834d STEP: Updating secret s-test-opt-upd-a49ccfb5-c3c9-4911-b979-971860e0945d STEP: Creating secret with name s-test-opt-create-e55e438d-46fb-4921-82cf-48eb5ed812e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:02.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4211" for this suite. 
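------------------------------
The optional-updates spec above leans on SecretProjection's Optional flag: the s-test-opt-create-… secret does not exist when the pod starts, and without Optional the kubelet would hold the pod in ContainerCreating. The relevant volume shape, condensed into one volume here (the suite mounts the sources separately) with the secret names taken from the log; the program only prints the structure:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Exists at pod creation and is deleted later; the kubelet
					// drops its files from the mount on a subsequent sync.
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del-62193db7-657e-435e-9f16-0070b7d5834d"},
						Optional:             &optional,
					}},
					// Created only after the pod is running; its files appear
					// in the mount once the kubelet observes the secret.
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create-e55e438d-46fb-4921-82cf-48eb5ed812e2"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(vol, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------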
• [SLOW TEST:8.132 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":143,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:02.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Jun 17 22:01:02.506: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:01:02.528: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:01:04.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100062, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100062, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100062, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100062, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:01:07.546: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:07.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5312" for this suite. 
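------------------------------
The fail-closed behavior above comes from FailurePolicy: Fail: the suite registers a webhook whose backend can never answer, so the apiserver must deny any matching request instead of letting it through. The decisive fields, sketched; the hook name, path, and marker label are assumptions, and the namespaceSelector matters because a cluster-wide fail-closed webhook with an unreachable backend would block configmap writes everywhere:

package main

import (
	"encoding/json"
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admv1.Fail
	none := admv1.SideEffectClassNone
	path := "/unreachable" // hypothetical; the point is that nothing serves it

	hook := admv1.ValidatingWebhook{
		Name: "fail-closed.example.com", // hypothetical
		ClientConfig: admv1.WebhookClientConfig{
			Service: &admv1.ServiceReference{Namespace: "webhook-5312", Name: "no-such-service", Path: &path},
		},
		Rules: []admv1.RuleWithOperations{{
			Operations: []admv1.OperationType{admv1.Create},
			Rule:       admv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"}},
		}},
		// Scope the blast radius to a marker namespace, as the suite does with
		// the dedicated namespace it creates for this spec.
		NamespaceSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"fail-closed-webhook": "true"}, // hypothetical label
		},
		// Fail (rather than Ignore) is what makes an unreachable webhook deny
		// the request: the configmap create above is rejected unconditionally.
		FailurePolicy:           &fail,
		SideEffects:             &none,
		AdmissionReviewVersions: []string{"v1"},
	}
	out, err := json.MarshalIndent(hook, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------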
STEP: Destroying namespace "webhook-5312-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.375 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":11,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:00.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Jun 17 22:01:01.337: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:01:01.348: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:01:03.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100061, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100061, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100061, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100061, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:01:06.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jun 17 22:01:11.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-753 attach --namespace=webhook-753 to-be-attached-pod 
-i -c=container1' Jun 17 22:01:11.578: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:11.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-753" for this suite. STEP: Destroying namespace "webhook-753-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.672 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":11,"skipped":88,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:07.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:01:07.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447" in namespace "downward-api-5350" to be "Succeeded or Failed" Jun 17 22:01:07.698: INFO: Pod "downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447": Phase="Pending", Reason="", readiness=false. Elapsed: 3.552434ms Jun 17 22:01:09.701: INFO: Pod "downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006494556s Jun 17 22:01:11.705: INFO: Pod "downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010156677s STEP: Saw pod success Jun 17 22:01:11.705: INFO: Pod "downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447" satisfied condition "Succeeded or Failed" Jun 17 22:01:11.707: INFO: Trying to get logs from node node2 pod downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447 container client-container: STEP: delete the pod Jun 17 22:01:11.717: INFO: Waiting for pod downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447 to disappear Jun 17 22:01:11.719: INFO: Pod downwardapi-volume-6992416b-6b8f-45c3-9f93-042e8e8b0447 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:11.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5350" for this suite. 
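------------------------------
The test above surfaces the container's memory request through a downward-API volume. A minimal sketch of the underlying pod spec, assuming the k8s.io/api and k8s.io/apimachinery modules are available; the container name, image, and file path are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Command: []string{"sh", "-c", "cat /etc/podinfo/mem_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "mem_request",
							// resourceFieldRef exposes the container's memory
							// request as a file inside the mounted volume.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------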
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":173,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":181,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:59.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:00:59.969: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jun 17 22:01:01.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100059, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100059, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100059, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100059, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:01:04.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:01:04.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8368-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:13.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1766" for this suite. STEP: Destroying namespace "webhook-1766-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.791 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":11,"skipped":181,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:13.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:01:13.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae" in namespace "projected-3380" to be "Succeeded or Failed" Jun 17 22:01:13.212: INFO: Pod "downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae": Phase="Pending", Reason="", readiness=false. Elapsed: 1.804421ms Jun 17 22:01:15.215: INFO: Pod "downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005470026s Jun 17 22:01:17.219: INFO: Pod "downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009202486s STEP: Saw pod success Jun 17 22:01:17.219: INFO: Pod "downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae" satisfied condition "Succeeded or Failed" Jun 17 22:01:17.221: INFO: Trying to get logs from node node2 pod downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae container client-container: STEP: delete the pod Jun 17 22:01:17.234: INFO: Waiting for pod downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae to disappear Jun 17 22:01:17.235: INFO: Pod downwardapi-volume-ceaa8076-2cca-45a6-b220-c24710e8daae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:17.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3380" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":188,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:17.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-047354e9-049d-4fe9-841a-1dc01fb26c4f STEP: Creating a pod to test consume configMaps Jun 17 22:01:17.298: INFO: Waiting up to 5m0s for pod "pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147" in namespace "configmap-8573" to be "Succeeded or Failed" Jun 17 22:01:17.300: INFO: Pod "pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148847ms Jun 17 22:01:19.305: INFO: Pod "pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006964696s Jun 17 22:01:21.310: INFO: Pod "pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011496902s STEP: Saw pod success Jun 17 22:01:21.310: INFO: Pod "pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147" satisfied condition "Succeeded or Failed" Jun 17 22:01:21.313: INFO: Trying to get logs from node node1 pod pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147 container agnhost-container: STEP: delete the pod Jun 17 22:01:21.327: INFO: Waiting for pod pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147 to disappear Jun 17 22:01:21.329: INFO: Pod pod-configmaps-243caf4a-472d-4beb-8a60-baa205ea2147 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:21.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8573" for this suite. 
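------------------------------
The "mappings and Item mode" wording above refers to the items list of a ConfigMap volume source, which remaps a key to a relative path and sets a per-file mode. A minimal sketch; the ConfigMap name, key, path, and mode are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400) // per-file mode for the mapped item
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Map key "data-1" to a different file name inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &itemMode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------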
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":198,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:56.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:26.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5698" for this suite. 
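------------------------------
The blackbox assertions above ('RestartCount', 'Phase', 'Ready', 'State') all read from pod status. A minimal client-go sketch of the same inspection, assuming a reachable cluster, a kubeconfig at the path shown, and an existing pod; the namespace and pod name are illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at this path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The e2e assertions reduce to checks over these status fields.
	fmt.Println("phase:", pod.Status.Phase)
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: restarts=%d ready=%v state=%+v\n",
			st.Name, st.RestartCount, st.Ready, st.State)
	}
}
------------------------------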
• [SLOW TEST:30.243 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":158,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:11.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:27.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5022" for this suite. • [SLOW TEST:16.110 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":12,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:27.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Jun 17 22:01:27.820: INFO: Waiting up to 5m0s for pod "client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c" in namespace "containers-6533" to be "Succeeded or Failed" Jun 17 22:01:27.822: INFO: Pod "client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123971ms Jun 17 22:01:29.825: INFO: Pod "client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00477676s Jun 17 22:01:31.829: INFO: Pod "client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008981945s STEP: Saw pod success Jun 17 22:01:31.829: INFO: Pod "client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c" satisfied condition "Succeeded or Failed" Jun 17 22:01:31.831: INFO: Trying to get logs from node node1 pod client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c container agnhost-container: STEP: delete the pod Jun 17 22:01:31.843: INFO: Waiting for pod client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c to disappear Jun 17 22:01:31.845: INFO: Pod client-containers-4c0feaed-596b-4cc9-8b1c-acf80822837c no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:31.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6533" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":120,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:11.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-hsw4 STEP: Creating a pod to test atomic-volume-subpath Jun 17 22:01:11.772: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hsw4" in namespace "subpath-7907" to be "Succeeded or Failed" Jun 17 22:01:11.774: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.92217ms Jun 17 22:01:13.776: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004631369s Jun 17 22:01:15.780: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 4.008311729s Jun 17 22:01:17.783: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 6.011747231s Jun 17 22:01:19.786: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 8.01449856s Jun 17 22:01:21.791: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 10.019606201s Jun 17 22:01:23.796: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 12.024521458s Jun 17 22:01:25.802: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 14.030006848s Jun 17 22:01:27.805: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 16.033777411s Jun 17 22:01:29.809: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 18.037831593s Jun 17 22:01:31.814: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 20.041980411s Jun 17 22:01:33.819: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Running", Reason="", readiness=true. Elapsed: 22.047799674s Jun 17 22:01:35.824: INFO: Pod "pod-subpath-test-downwardapi-hsw4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.051931593s STEP: Saw pod success Jun 17 22:01:35.824: INFO: Pod "pod-subpath-test-downwardapi-hsw4" satisfied condition "Succeeded or Failed" Jun 17 22:01:35.826: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-hsw4 container test-container-subpath-downwardapi-hsw4: STEP: delete the pod Jun 17 22:01:35.859: INFO: Waiting for pod pod-subpath-test-downwardapi-hsw4 to disappear Jun 17 22:01:35.861: INFO: Pod pod-subpath-test-downwardapi-hsw4 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-hsw4 Jun 17 22:01:35.861: INFO: Deleting pod "pod-subpath-test-downwardapi-hsw4" in namespace "subpath-7907" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:35.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7907" for this suite. • [SLOW TEST:24.136 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":176,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:57.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jun 17 22:01:02.000: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-285 PodName:var-expansion-7f1995b0-ef4c-4772-93a9-615b37c14486 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:01:02.000: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Jun 17 22:01:02.091: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-285 PodName:var-expansion-7f1995b0-ef4c-4772-93a9-615b37c14486 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:01:02.091: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Jun 17 22:01:02.681: INFO: Successfully updated pod "var-expansion-7f1995b0-ef4c-4772-93a9-615b37c14486" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jun 17 22:01:02.683: INFO: Deleting pod "var-expansion-7f1995b0-ef4c-4772-93a9-615b37c14486" in namespace "var-expansion-285" Jun 17 22:01:02.688: INFO: Wait up to 5m0s for pod "var-expansion-7f1995b0-ef4c-4772-93a9-615b37c14486" to be fully deleted 
[AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:38.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-285" for this suite. • [SLOW TEST:40.748 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":12,"skipped":83,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:38.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-38334436-e150-48d6-a7ee-77e27263b265 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:38.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9000" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:31.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 17 22:01:31.894: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:38.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6631" for this suite. 
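------------------------------
The init-container test above relies on spec.initContainers running sequentially to completion before any regular container starts. A minimal sketch of such a RestartAlways pod; images and commands are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run one at a time, in order, and each must
			// exit 0 before the regular containers are started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------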
• [SLOW TEST:6.979 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":14,"skipped":128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:35.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 17 22:01:35.918: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8567 f3ef5679-7d79-410f-912a-ed4817d849ba 35458 0 2022-06-17 22:01:35 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-06-17 22:01:35 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c45b8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c45b8,ReadOnly
:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:01:35.921: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:37.924: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:39.925: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jun 17 22:01:39.925: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8567 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:01:39.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Jun 17 22:01:40.020: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8567 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:01:40.020: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:01:40.113: INFO: Deleting pod test-dns-nameservers... 
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:40.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8567" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":14,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:38.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:42.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2907" for this suite. 
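------------------------------
The sysctl spec above requests kernel.shm_rmid_forced through the pod-level security context. A minimal sketch; note that sysctls outside the kubelet's safe set must be enabled per node via --allowed-unsafe-sysctls, and the image and command here are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Namespaced sysctls are requested per pod; the kubelet
				// rejects the pod if a requested sysctl is not allowed.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sysctl", "kernel.shm_rmid_forced"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------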
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":14,"skipped":93,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:42.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:42.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9341" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":15,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:42.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:43.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3841" for this suite. 
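------------------------------
The Events API lifecycle above (create, list, patch, update, delete) is plain CRUD against the events.k8s.io/v1 group. A minimal client-go sketch of the create/list/delete legs, assuming a reachable cluster; the namespace, names, and field values are illustrative, and the populated fields follow the events.k8s.io/v1 schema.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	eventsv1 "k8s.io/api/events/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"

	// Create a test event against the events.k8s.io/v1 API.
	ev := &eventsv1.Event{
		ObjectMeta:          metav1.ObjectMeta{Name: "test-event", Namespace: ns},
		EventTime:           metav1.MicroTime{Time: time.Now()},
		ReportingController: "example.com/demo-controller",
		ReportingInstance:   "demo-controller-1",
		Action:              "Demo",
		Reason:              "Testing",
		Type:                "Normal",
		Regarding:           corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
		Note:                "created for illustration",
	}
	if _, err := client.EventsV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// List events in the namespace, then clean up.
	list, err := client.EventsV1().Events(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("events in namespace:", len(list.Items))
	_ = client.EventsV1().Events(ns).Delete(ctx, "test-event", metav1.DeleteOptions{})
}
------------------------------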
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":16,"skipped":127,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:43.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:01:43.378: INFO: Checking APIGroup: apiregistration.k8s.io Jun 17 22:01:43.379: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jun 17 22:01:43.379: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.379: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jun 17 22:01:43.379: INFO: Checking APIGroup: apps Jun 17 22:01:43.379: INFO: PreferredVersion.GroupVersion: apps/v1 Jun 17 22:01:43.379: INFO: Versions found [{apps/v1 v1}] Jun 17 22:01:43.379: INFO: apps/v1 matches apps/v1 Jun 17 22:01:43.379: INFO: Checking APIGroup: events.k8s.io Jun 17 22:01:43.380: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jun 17 22:01:43.380: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.380: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jun 17 22:01:43.380: INFO: Checking APIGroup: authentication.k8s.io Jun 17 22:01:43.381: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jun 17 22:01:43.381: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.381: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jun 17 22:01:43.381: INFO: Checking APIGroup: authorization.k8s.io Jun 17 22:01:43.382: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jun 17 22:01:43.382: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.382: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jun 17 22:01:43.382: INFO: Checking APIGroup: autoscaling Jun 17 22:01:43.383: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jun 17 22:01:43.383: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jun 17 22:01:43.383: INFO: autoscaling/v1 matches autoscaling/v1 Jun 17 22:01:43.383: INFO: Checking APIGroup: batch Jun 17 22:01:43.384: INFO: PreferredVersion.GroupVersion: batch/v1 Jun 17 22:01:43.384: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jun 17 22:01:43.384: INFO: batch/v1 matches batch/v1 Jun 17 22:01:43.384: INFO: Checking APIGroup: certificates.k8s.io Jun 17 22:01:43.385: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jun 17 22:01:43.385: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.385: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jun 17 22:01:43.385: INFO: Checking 
APIGroup: networking.k8s.io Jun 17 22:01:43.386: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jun 17 22:01:43.386: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.386: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jun 17 22:01:43.386: INFO: Checking APIGroup: extensions Jun 17 22:01:43.387: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jun 17 22:01:43.387: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jun 17 22:01:43.387: INFO: extensions/v1beta1 matches extensions/v1beta1 Jun 17 22:01:43.387: INFO: Checking APIGroup: policy Jun 17 22:01:43.388: INFO: PreferredVersion.GroupVersion: policy/v1 Jun 17 22:01:43.388: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Jun 17 22:01:43.388: INFO: policy/v1 matches policy/v1 Jun 17 22:01:43.388: INFO: Checking APIGroup: rbac.authorization.k8s.io Jun 17 22:01:43.390: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jun 17 22:01:43.390: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.390: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jun 17 22:01:43.390: INFO: Checking APIGroup: storage.k8s.io Jun 17 22:01:43.391: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jun 17 22:01:43.391: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.391: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jun 17 22:01:43.391: INFO: Checking APIGroup: admissionregistration.k8s.io Jun 17 22:01:43.392: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jun 17 22:01:43.392: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.392: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jun 17 22:01:43.392: INFO: Checking APIGroup: apiextensions.k8s.io Jun 17 22:01:43.392: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jun 17 22:01:43.392: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.392: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jun 17 22:01:43.392: INFO: Checking APIGroup: scheduling.k8s.io Jun 17 22:01:43.393: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jun 17 22:01:43.393: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.393: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jun 17 22:01:43.393: INFO: Checking APIGroup: coordination.k8s.io Jun 17 22:01:43.394: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jun 17 22:01:43.394: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.394: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jun 17 22:01:43.394: INFO: Checking APIGroup: node.k8s.io Jun 17 22:01:43.395: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jun 17 22:01:43.395: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.396: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jun 17 22:01:43.396: INFO: Checking APIGroup: discovery.k8s.io Jun 17 22:01:43.397: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Jun 17 22:01:43.397: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.397: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Jun 17 22:01:43.397: INFO: 
Checking APIGroup: flowcontrol.apiserver.k8s.io Jun 17 22:01:43.397: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jun 17 22:01:43.397: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.397: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jun 17 22:01:43.397: INFO: Checking APIGroup: intel.com Jun 17 22:01:43.398: INFO: PreferredVersion.GroupVersion: intel.com/v1 Jun 17 22:01:43.398: INFO: Versions found [{intel.com/v1 v1}] Jun 17 22:01:43.398: INFO: intel.com/v1 matches intel.com/v1 Jun 17 22:01:43.398: INFO: Checking APIGroup: k8s.cni.cncf.io Jun 17 22:01:43.399: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Jun 17 22:01:43.399: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Jun 17 22:01:43.399: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Jun 17 22:01:43.399: INFO: Checking APIGroup: monitoring.coreos.com Jun 17 22:01:43.400: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Jun 17 22:01:43.400: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Jun 17 22:01:43.400: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Jun 17 22:01:43.400: INFO: Checking APIGroup: telemetry.intel.com Jun 17 22:01:43.400: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Jun 17 22:01:43.401: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Jun 17 22:01:43.401: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Jun 17 22:01:43.401: INFO: Checking APIGroup: custom.metrics.k8s.io Jun 17 22:01:43.401: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Jun 17 22:01:43.401: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Jun 17 22:01:43.401: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:43.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-430" for this suite. 
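------------------------------
The discovery check above is a walk over the API groups: each group advertises a preferredVersion, and the test asserts it is listed among that group's versions. A minimal client-go sketch of the same walk, assuming a reachable cluster and a kubeconfig at the path shown.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ServerGroups returns the same APIGroup list the e2e test iterates.
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				found = true
			}
		}
		fmt.Printf("group %q preferred %q (preferred listed among versions: %v)\n",
			g.Name, g.PreferredVersion.GroupVersion, found)
	}
}
------------------------------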
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":17,"skipped":140,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:38.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 17 22:01:38.938: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 17 22:01:43.941: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:44.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5199" for this suite. • [SLOW TEST:6.054 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:40.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:01:40.255: INFO: The status of Pod busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:42.260: INFO: The status of Pod busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:44.259: INFO: The status of Pod busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:46.260: INFO: The status of Pod busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:01:46.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1228" for this suite. 
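------------------------------
"Should not write to root filesystem" is enforced by the container security context. A minimal sketch of a pod whose container gets a read-only root filesystem; the image and command are illustrative, and any writable path would have to come from a mounted volume.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-fs",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "touch /should-fail; sleep 5"},
				SecurityContext: &corev1.SecurityContext{
					// With a read-only root filesystem, the touch above fails;
					// writable paths must come from mounted volumes.
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------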
• [SLOW TEST:6.062 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":229,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:01:21.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-3435
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 22:01:21.369: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 22:01:21.401: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:01:23.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:01:25.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:27.403: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:29.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:31.408: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:33.406: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:35.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:37.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:39.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:41.406: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 22:01:43.405: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 22:01:43.409: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 22:01:49.432: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Jun 17 22:01:49.433: INFO: Breadth first check of 10.244.4.154 on host 10.10.190.207...
Jun 17 22:01:49.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.63:9080/dial?request=hostname&protocol=http&host=10.244.4.154&port=8080&tries=1'] Namespace:pod-network-test-3435 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 22:01:49.434: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 22:01:49.582: INFO: Waiting for responses: map[]
Jun 17 22:01:49.582: INFO: reached 10.244.4.154 after 0/1 tries
Jun 17 22:01:49.582: INFO: Breadth first check of 10.244.3.57 on host 10.10.190.208...
Jun 17 22:01:49.586: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.63:9080/dial?request=hostname&protocol=http&host=10.244.3.57&port=8080&tries=1'] Namespace:pod-network-test-3435 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 22:01:49.586: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 22:01:49.713: INFO: Waiting for responses: map[]
Jun 17 22:01:49.713: INFO: reached 10.244.3.57 after 0/1 tries
Jun 17 22:01:49.713: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:01:49.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3435" for this suite.

• [SLOW TEST:28.378 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":200,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:01:46.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1aa43f85-8d50-40b7-a224-3ce709aa5775
STEP: Creating a pod to test consume secrets
Jun 17 22:01:46.341: INFO: Waiting up to 5m0s for pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968" in namespace "secrets-4241" to be "Succeeded or Failed"
Jun 17 22:01:46.343: INFO: Pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968": Phase="Pending", Reason="", readiness=false. Elapsed: 1.936453ms
Jun 17 22:01:48.347: INFO: Pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006302225s
Jun 17 22:01:50.351: INFO: Pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010101626s
Jun 17 22:01:52.354: INFO: Pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012845261s
Jun 17 22:01:54.358: INFO: Pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016582381s
STEP: Saw pod success
Jun 17 22:01:54.358: INFO: Pod "pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968" satisfied condition "Succeeded or Failed"
Jun 17 22:01:54.361: INFO: Trying to get logs from node node1 pod pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968 container secret-volume-test:
STEP: delete the pod
Jun 17 22:01:54.374: INFO: Waiting for pod pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968 to disappear
Jun 17 22:01:54.376: INFO: Pod pod-secrets-8cd90753-7806-4d14-8c7d-8c14c242e968 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:01:54.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4241" for this suite.

• [SLOW TEST:8.084 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:01:43.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jun 17 22:01:51.472: INFO: &Pod{ObjectMeta:{send-events-26f1edeb-46d9-4567-b1b2-926258bc968b events-4382 e0380e64-b3a4-4c64-9411-63768004a2e2 35848 0 2022-06-17 22:01:43 +0000 UTC map[name:foo time:451949433] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.65" ], "mac": "c6:10:df:73:1b:50", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.65" ], "mac": "c6:10:df:73:1b:50", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-06-17 22:01:43 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:01:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:01:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2grd8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2grd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostP
ID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:01:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:01:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:01:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:01:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.65,StartTime:2022-06-17 22:01:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:01:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://7e878ebdfbe3437db0ac9ae0d8b13e7c9327b481c9deef1fb4431674bc66a949,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Jun 17 22:01:53.477: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jun 17 22:01:55.482: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:01:55.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4382" for this suite.
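The scheduler and kubelet event checks above can be reproduced by listing events whose involvedObject matches the pod. A minimal client-go sketch, under the same kubeconfig assumption, with the pod name and namespace taken from the log above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Select only events emitted for the test pod; the scheduling event comes
	// from the scheduler, the pulling/started events from the kubelet.
	events, err := cs.CoreV1().Events("events-4382").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=send-events-26f1edeb-46d9-4567-b1b2-926258bc968b",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s from %s: %s\n", e.Reason, e.Source.Component, e.Message)
	}
}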
• [SLOW TEST:12.080 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":18,"skipped":143,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":71,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:29.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6408
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6408
I0617 21:59:29.648361 35 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6408, replica count: 2
I0617 21:59:32.699066 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:35.700858 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:38.702103 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:41.704787 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:44.705908 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:47.706810 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 17 21:59:47.706: INFO: Creating new exec pod
Jun 17 21:59:56.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jun 17 21:59:56.995: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jun 17 21:59:56.995: INFO: stdout: "externalname-service-rm766"
Jun 17 21:59:56.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.38.46 80'
Jun 17 21:59:57.248: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.38.46 80\nConnection to 10.233.38.46 80 port [tcp/http] succeeded!\n"
Jun 17 21:59:57.248: INFO: stdout: "externalname-service-krpfv"
Jun 17 21:59:57.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434'
Jun 17 21:59:57.504: INFO: rc: 1
Jun 17 21:59:57.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30434
nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 21:59:58.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434'
Jun 17 21:59:59.139: INFO: rc: 1
Jun 17 21:59:59.139: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30434
nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 21:59:59.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434'
Jun 17 21:59:59.874: INFO: rc: 1
Jun 17 21:59:59.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30434
nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 22:00:00.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434'
Jun 17 22:00:00.769: INFO: rc: 1
Jun 17 22:00:00.769: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30434
nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 22:00:01.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:01.752: INFO: rc: 1 Jun 17 22:00:01.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:02.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:02.746: INFO: rc: 1 Jun 17 22:00:02.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:03.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:03.753: INFO: rc: 1 Jun 17 22:00:03.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:04.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:04.725: INFO: rc: 1 Jun 17 22:00:04.725: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:05.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:05.745: INFO: rc: 1 Jun 17 22:00:05.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:06.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:06.726: INFO: rc: 1 Jun 17 22:00:06.726: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:07.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:07.722: INFO: rc: 1 Jun 17 22:00:07.722: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:08.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:08.763: INFO: rc: 1 Jun 17 22:00:08.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:09.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:09.779: INFO: rc: 1 Jun 17 22:00:09.779: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:10.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:10.762: INFO: rc: 1 Jun 17 22:00:10.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:11.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:12.268: INFO: rc: 1 Jun 17 22:00:12.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:12.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:12.742: INFO: rc: 1 Jun 17 22:00:12.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:13.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:13.767: INFO: rc: 1 Jun 17 22:00:13.767: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:14.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:14.815: INFO: rc: 1 Jun 17 22:00:14.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:15.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:16.106: INFO: rc: 1 Jun 17 22:00:16.107: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:16.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:16.971: INFO: rc: 1 Jun 17 22:00:16.971: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:17.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:17.798: INFO: rc: 1 Jun 17 22:00:17.799: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:18.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:18.871: INFO: rc: 1 Jun 17 22:00:18.871: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:19.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:19.786: INFO: rc: 1 Jun 17 22:00:19.786: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:20.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:20.775: INFO: rc: 1 Jun 17 22:00:20.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:21.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:21.753: INFO: rc: 1 Jun 17 22:00:21.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:22.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:22.863: INFO: rc: 1 Jun 17 22:00:22.863: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:23.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:23.768: INFO: rc: 1 Jun 17 22:00:23.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:24.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:24.741: INFO: rc: 1 Jun 17 22:00:24.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:25.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:25.846: INFO: rc: 1 Jun 17 22:00:25.846: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:26.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:26.885: INFO: rc: 1 Jun 17 22:00:26.885: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:27.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:27.999: INFO: rc: 1 Jun 17 22:00:27.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:28.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:28.795: INFO: rc: 1 Jun 17 22:00:28.795: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:29.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:29.838: INFO: rc: 1 Jun 17 22:00:29.838: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:30.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:30.953: INFO: rc: 1 Jun 17 22:00:30.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:31.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:31.856: INFO: rc: 1 Jun 17 22:00:31.856: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:32.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:32.983: INFO: rc: 1 Jun 17 22:00:32.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:33.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:33.857: INFO: rc: 1 Jun 17 22:00:33.857: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:34.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:34.816: INFO: rc: 1 Jun 17 22:00:34.816: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:35.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:35.953: INFO: rc: 1 Jun 17 22:00:35.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:36.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:36.925: INFO: rc: 1 Jun 17 22:00:36.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:37.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:37.762: INFO: rc: 1 Jun 17 22:00:37.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:38.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:38.741: INFO: rc: 1 Jun 17 22:00:38.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:39.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:39.767: INFO: rc: 1 Jun 17 22:00:39.767: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30434 + echo hostName nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:40.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:40.752: INFO: rc: 1 Jun 17 22:00:40.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:41.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:41.770: INFO: rc: 1 Jun 17 22:00:41.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:42.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:42.736: INFO: rc: 1 Jun 17 22:00:42.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:43.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:43.763: INFO: rc: 1 Jun 17 22:00:43.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:44.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:44.851: INFO: rc: 1 Jun 17 22:00:44.851: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:45.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:00:46.024: INFO: rc: 1 Jun 17 22:00:46.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo+ hostName nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:46.504 to 22:01:56.758: INFO: the same probe ('echo hostName | nc -v -t -w 2 10.10.190.207 30434' from execpodt5pz9) was rerun roughly once per second; every attempt returned rc: 1 with 'nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused'. Retrying... 
Jun 17 22:01:57.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:01:57.761: INFO: rc: 1 Jun 17 22:01:57.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:57.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434' Jun 17 22:01:58.040: INFO: rc: 1 Jun 17 22:01:58.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6408 exec execpodt5pz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30434: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30434 nc: connect to 10.10.190.207 port 30434 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:58.041: FAIL: Unexpected error: <*errors.errorString | 0xc003bd6360>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30434 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30434 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.15() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a73200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002a73200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002a73200, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 Jun 17 22:01:58.042: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-6408". STEP: Found 17 events. 
Jun 17 22:01:58.070: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodt5pz9: { } Scheduled: Successfully assigned services-6408/execpodt5pz9 to node2 Jun 17 22:01:58.070: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-krpfv: { } Scheduled: Successfully assigned services-6408/externalname-service-krpfv to node2 Jun 17 22:01:58.070: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-rm766: { } Scheduled: Successfully assigned services-6408/externalname-service-rm766 to node1 Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:29 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-rm766 Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:29 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-krpfv Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:32 +0000 UTC - event for externalname-service-rm766: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:33 +0000 UTC - event for externalname-service-krpfv: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:33 +0000 UTC - event for externalname-service-rm766: {kubelet node1} Created: Created container externalname-service Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:33 +0000 UTC - event for externalname-service-rm766: {kubelet node1} Started: Started container externalname-service Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:33 +0000 UTC - event for externalname-service-rm766: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 267.217293ms Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for externalname-service-krpfv: {kubelet node2} Created: Created container externalname-service Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for externalname-service-krpfv: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 7.295927724s Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:41 +0000 UTC - event for externalname-service-krpfv: {kubelet node2} Started: Started container externalname-service Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:50 +0000 UTC - event for execpodt5pz9: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:50 +0000 UTC - event for execpodt5pz9: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 306.329675ms Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:51 +0000 UTC - event for execpodt5pz9: {kubelet node2} Started: Started container agnhost-container Jun 17 22:01:58.070: INFO: At 2022-06-17 21:59:51 +0000 UTC - event for execpodt5pz9: {kubelet node2} Created: Created container agnhost-container Jun 17 22:01:58.073: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 22:01:58.073: INFO: execpodt5pz9 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:47 +0000 UTC }] Jun 17 22:01:58.073: INFO: externalname-service-krpfv node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:29 +0000 
UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:29 +0000 UTC }] Jun 17 22:01:58.073: INFO: externalname-service-rm766 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 21:59:29 +0000 UTC }] Jun 17 22:01:58.073: INFO: Jun 17 22:01:58.077: INFO: Logging node info for node master1 Jun 17 22:01:58.080: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 35810 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:01:58.080: INFO: Logging kubelet events for node master1 Jun 17 22:01:58.082: INFO: Logging pods the kubelet thinks is on node master1 Jun 17 22:01:58.117: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.117: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:01:58.117: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.117: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:01:58.117: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:01:58.117: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:01:58.117: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:01:58.117: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses 
recorded)
Jun 17 22:01:58.117: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.117: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.117: INFO: Container kube-scheduler ready: true, restart count 0
Jun 17 22:01:58.117: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.117: INFO: Container kube-proxy ready: true, restart count 2
Jun 17 22:01:58.117: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.117: INFO: Container docker-registry ready: true, restart count 0
Jun 17 22:01:58.117: INFO: Container nginx ready: true, restart count 0
Jun 17 22:01:58.117: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.117: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.117: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.199: INFO: Latency metrics for node master1
Jun 17 22:01:58.199: INFO: Logging node info for node master2
Jun 17 22:01:58.201: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 35977 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:01:58.202: INFO: Logging kubelet events for node master2 Jun 17 22:01:58.204: INFO: Logging pods the kubelet thinks is on node master2 Jun 17 22:01:58.220: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.220: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:01:58.220: INFO: kube-scheduler-master2 
started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container kube-scheduler ready: true, restart count 2
Jun 17 22:01:58.220: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Init container install-cni ready: true, restart count 2
Jun 17 22:01:58.220: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.220: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container nfd-controller ready: true, restart count 0
Jun 17 22:01:58.220: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.220: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.220: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container kube-apiserver ready: true, restart count 0
Jun 17 22:01:58.220: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container kube-proxy ready: true, restart count 1
Jun 17 22:01:58.220: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.220: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container coredns ready: true, restart count 1
Jun 17 22:01:58.220: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.220: INFO: Container autoscaler ready: true, restart count 1
Jun 17 22:01:58.312: INFO: Latency metrics for node master2
Jun 17 22:01:58.312: INFO: Logging node info for node master3
Jun 17 22:01:58.316: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 35932 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:01:58.316: INFO: Logging kubelet events for node master3 Jun 17 22:01:58.319: INFO: Logging pods the kubelet thinks is on node master3 Jun 17 22:01:58.331: INFO: 
kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-apiserver ready: true, restart count 0
Jun 17 22:01:58.331: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-scheduler ready: true, restart count 2
Jun 17 22:01:58.331: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-proxy ready: true, restart count 1
Jun 17 22:01:58.331: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Init container install-cni ready: true, restart count 0
Jun 17 22:01:58.331: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.331: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.331: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.331: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.331: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 17 22:01:58.331: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container coredns ready: true, restart count 1
Jun 17 22:01:58.331: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.331: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.331: INFO: Container prometheus-operator ready: true, restart count 0
Jun 17 22:01:58.418: INFO: Latency metrics for node master3
Jun 17 22:01:58.418: INFO: Logging node info for node node1
Jun 17 22:01:58.420: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 35868 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true
feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:01:58.422: INFO: Logging kubelet events for node node1 Jun 17 22:01:58.424: INFO: Logging pods the kubelet thinks is on node node1 Jun 17 22:01:58.441: INFO: client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a started at 2022-06-17 22:01:53 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.441: INFO: Container env3cont ready: false, restart count 0 Jun 17 22:01:58.441: INFO: adopt-release-p7hdj started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.441: INFO: Container c ready: false, restart count 0 Jun 17 22:01:58.441: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.441: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:01:58.441: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.441: INFO: Container kube-multus ready: true, restart count 
1
Jun 17 22:01:58.441: INFO: execpod8gpkx started at 2022-06-17 22:00:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container agnhost-container ready: true, restart count 0
Jun 17 22:01:58.441: INFO: server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 started at 2022-06-17 22:01:49 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container srv ready: true, restart count 0
Jun 17 22:01:58.441: INFO: busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e started at 2022-06-17 22:01:54 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e ready: false, restart count 0
Jun 17 22:01:58.441: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container kubernetes-dashboard ready: true, restart count 2
Jun 17 22:01:58.441: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container tas-extender ready: true, restart count 0
Jun 17 22:01:58.441: INFO: externalname-service-rm766 started at 2022-06-17 21:59:29 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container externalname-service ready: true, restart count 0
Jun 17 22:01:58.441: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container nfd-worker ready: true, restart count 0
Jun 17 22:01:58.441: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container config-reloader ready: true, restart count 0
Jun 17 22:01:58.441: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 17 22:01:58.441: INFO: Container grafana ready: true, restart count 0
Jun 17 22:01:58.441: INFO: Container prometheus ready: true, restart count 1
Jun 17 22:01:58.441: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 22:01:58.441: INFO: Container collectd ready: true, restart count 0
Jun 17 22:01:58.442: INFO: Container collectd-exporter ready: true, restart count 0
Jun 17 22:01:58.442: INFO: Container rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.442: INFO: var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912 started at 2022-06-17 21:59:39 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container dapi-container ready: true, restart count 0
Jun 17 22:01:58.442: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Init container install-cni ready: true, restart count 2
Jun 17 22:01:58.442: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.442: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 17 22:01:58.442: INFO: affinity-nodeport-transition-rwm2c started at 2022-06-17 21:59:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 17 22:01:58.442: INFO: netserver-0 started at 2022-06-17 22:01:21 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container webserver ready: false, restart count 0
Jun 17 22:01:58.442: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container discover ready: false, restart count 0
Jun 17 22:01:58.442: INFO: Container init ready: false, restart count 0
Jun 17 22:01:58.442: INFO: Container install ready: false, restart count 0
Jun 17 22:01:58.442: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.442: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.442: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container cmk-webhook ready: true, restart count 0
Jun 17 22:01:58.442: INFO: sample-apiserver-deployment-64f6b9dc99-hx87j started at 2022-06-17 22:01:45 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container etcd ready: false, restart count 0
Jun 17 22:01:58.442: INFO: Container sample-apiserver ready: false, restart count 0
Jun 17 22:01:58.442: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container kube-proxy ready: true, restart count 2
Jun 17 22:01:58.442: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.442: INFO: Container nodereport ready: true, restart count 0
Jun 17 22:01:58.442: INFO: Container reconcile ready: true, restart count 0
Jun 17 22:01:58.757: INFO: Latency metrics for node node1
Jun 17 22:01:58.757: INFO: Logging node info for node node2
Jun 17 22:01:58.760: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 35869 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:01:58.761: INFO: Logging kubelet events for node node2 Jun 17 22:01:58.763: INFO: Logging pods the kubelet thinks is on node node2 Jun 17 22:01:58.782: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:01:58.782: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:01:58.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:01:58.782: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:01:58.782: INFO: test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 started at 2022-06-17 22:01:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container test-webserver ready: true, restart count 0 Jun 17 22:01:58.782: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:01:58.782: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:01:58.782: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:01:58.782: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 17 22:01:58.782: INFO: Container discover ready: false, 
restart count 0 Jun 17 22:01:58.782: INFO: Container init ready: false, restart count 0 Jun 17 22:01:58.782: INFO: Container install ready: false, restart count 0 Jun 17 22:01:58.782: INFO: adopt-release-bklcl started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container c ready: false, restart count 0 Jun 17 22:01:58.782: INFO: nodeport-test-kqgs5 started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container nodeport-test ready: true, restart count 0 Jun 17 22:01:58.782: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:01:58.782: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:01:58.782: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:01:58.782: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded) Jun 17 22:01:58.782: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:01:58.782: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:01:58.782: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:01:58.782: INFO: Container collectd ready: true, restart count 0 Jun 17 22:01:58.782: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:01:58.782: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:01:58.782: INFO: netserver-1 started at 2022-06-17 22:01:21 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container webserver ready: false, restart count 0 Jun 17 22:01:58.782: INFO: affinity-nodeport-transition-pmhvr started at 2022-06-17 21:59:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 17 22:01:58.782: INFO: test-container-pod started at 2022-06-17 22:01:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container webserver ready: false, restart count 0 Jun 17 22:01:58.782: INFO: send-events-26f1edeb-46d9-4567-b1b2-926258bc968b started at 2022-06-17 22:01:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container p ready: true, restart count 0 Jun 17 22:01:58.782: INFO: execpodt5pz9 started at 2022-06-17 21:59:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:01:58.782: INFO: nodeport-test-l42bj started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container nodeport-test ready: true, restart count 0 Jun 17 22:01:58.782: INFO: affinity-nodeport-transition-5p5xs started at 2022-06-17 21:59:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jun 17 22:01:58.782: INFO: externalname-service-krpfv started at 2022-06-17 21:59:29 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container externalname-service ready: true, restart count 0 Jun 17 22:01:58.782: INFO: busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 started 
at 2022-06-17 22:01:40 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 ready: true, restart count 0 Jun 17 22:01:58.782: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:01:58.782: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:01:59.037: INFO: Latency metrics for node node2 Jun 17 22:01:59.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6408" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [149.442 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:01:58.041: Unexpected error: <*errors.errorString | 0xc003bd6360>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30434 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30434 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":3,"skipped":71,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSS
------------------------------
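For readers tracing the failure above: the type change this conformance test exercises is an ordinary Service update. Below is a minimal client-go sketch of that step, not the e2e framework's actual code; the namespace, service name, NodePort, and kubeconfig path are taken from this log, and error handling is collapsed to panics for brevity.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the ExternalName service and flip its type, clearing
	// externalName, which is only valid on ExternalName services.
	svc, err := cs.CoreV1().Services("services-6408").Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = v1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""

	updated, err := cs.CoreV1().Services("services-6408").Update(context.TODO(), svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	// The apiserver allocates the node port; 30434 in the run logged above.
	fmt.Println("allocated NodePort:", updated.Spec.Ports[0].NodePort)
}

After the update, the framework probes <node IP>:<allocated NodePort> until it answers or the 2m0s reachability budget expires; that budget is the "not reachable within 2m0s timeout" failure recorded here.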
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jun 17 21:59:09.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W0617 21:59:09.130947 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 21:59:09.131: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 21:59:09.132: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-6595 Jun 17 21:59:09.156: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:11.161: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:13.159: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:15.160: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 17 21:59:17.160: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 17 21:59:17.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 17 21:59:19.490: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jun 17 21:59:19.490: INFO: stdout: "iptables" Jun 17 21:59:19.490: INFO: proxyMode: iptables Jun 17 21:59:19.496: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 17 21:59:19.498: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-6595 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6595 I0617 21:59:19.509415 32 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6595, replica count: 3 I0617 21:59:22.560193 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 21:59:25.561023 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 21:59:28.562083 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 21:59:31.563233 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 21:59:34.563713 32 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 17 21:59:34.573: INFO: Creating new exec pod Jun 17 21:59:43.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Jun 17 21:59:43.856: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Jun 17 21:59:43.856: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 21:59:43.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.49.100 80' Jun 17 21:59:44.097: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.49.100 80\nConnection to 10.233.49.100 80 port [tcp/http] succeeded!\n" Jun 17 21:59:44.097: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 21:59:44.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 21:59:45.505: INFO: rc: 1 Jun 17 21:59:45.505: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32753
nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[The identical probe was re-run roughly once per second from Jun 17 21:59:46.507 through Jun 17 22:00:52.751; every attempt returned rc: 1 with the same "Connection refused" output shown above, followed by "Retrying...".]
Jun 17 22:00:53.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:53.760: INFO: rc: 1 Jun 17 22:00:53.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32753 + echo hostName nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:54.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:54.816: INFO: rc: 1 Jun 17 22:00:54.816: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:55.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:55.749: INFO: rc: 1 Jun 17 22:00:55.749: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:56.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:56.772: INFO: rc: 1 Jun 17 22:00:56.773: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:57.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:57.767: INFO: rc: 1 Jun 17 22:00:57.767: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:58.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:58.756: INFO: rc: 1 Jun 17 22:00:58.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:59.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:00:59.734: INFO: rc: 1 Jun 17 22:00:59.734: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:00.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:00.777: INFO: rc: 1 Jun 17 22:01:00.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:01.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:01.760: INFO: rc: 1 Jun 17 22:01:01.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:02.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:02.746: INFO: rc: 1 Jun 17 22:01:02.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:03.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:03.826: INFO: rc: 1 Jun 17 22:01:03.827: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:04.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:04.740: INFO: rc: 1 Jun 17 22:01:04.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:05.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:05.758: INFO: rc: 1 Jun 17 22:01:05.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:06.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:06.767: INFO: rc: 1 Jun 17 22:01:06.767: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:07.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:07.811: INFO: rc: 1 Jun 17 22:01:07.811: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:08.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:08.902: INFO: rc: 1 Jun 17 22:01:08.902: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:09.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:09.757: INFO: rc: 1 Jun 17 22:01:09.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:10.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:10.769: INFO: rc: 1 Jun 17 22:01:10.769: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:11.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:11.764: INFO: rc: 1 Jun 17 22:01:11.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:12.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:12.880: INFO: rc: 1 Jun 17 22:01:12.880: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:13.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:13.867: INFO: rc: 1 Jun 17 22:01:13.867: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:14.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:15.272: INFO: rc: 1 Jun 17 22:01:15.272: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:15.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:15.782: INFO: rc: 1 Jun 17 22:01:15.782: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:16.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:16.747: INFO: rc: 1 Jun 17 22:01:16.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:17.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:17.794: INFO: rc: 1 Jun 17 22:01:17.794: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:18.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:18.931: INFO: rc: 1 Jun 17 22:01:18.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:19.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:19.767: INFO: rc: 1 Jun 17 22:01:19.767: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:20.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:20.761: INFO: rc: 1 Jun 17 22:01:20.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:21.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:21.745: INFO: rc: 1 Jun 17 22:01:21.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:22.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:22.809: INFO: rc: 1 Jun 17 22:01:22.809: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:23.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:23.771: INFO: rc: 1 Jun 17 22:01:23.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:24.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:25.101: INFO: rc: 1 Jun 17 22:01:25.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:25.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:25.766: INFO: rc: 1 Jun 17 22:01:25.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:26.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:26.779: INFO: rc: 1 Jun 17 22:01:26.779: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:27.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:27.774: INFO: rc: 1 Jun 17 22:01:27.774: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:28.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:29.224: INFO: rc: 1 Jun 17 22:01:29.224: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:29.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:29.723: INFO: rc: 1 Jun 17 22:01:29.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:30.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:30.840: INFO: rc: 1 Jun 17 22:01:30.840: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:31.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:31.758: INFO: rc: 1 Jun 17 22:01:31.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:32.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:32.757: INFO: rc: 1 Jun 17 22:01:32.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:33.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:33.755: INFO: rc: 1 Jun 17 22:01:33.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:34.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:34.742: INFO: rc: 1 Jun 17 22:01:34.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:35.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:35.759: INFO: rc: 1 Jun 17 22:01:35.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:36.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:36.760: INFO: rc: 1 Jun 17 22:01:36.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:37.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:37.754: INFO: rc: 1 Jun 17 22:01:37.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:38.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:38.816: INFO: rc: 1 Jun 17 22:01:38.816: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:39.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:39.742: INFO: rc: 1 Jun 17 22:01:39.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:40.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:40.742: INFO: rc: 1 Jun 17 22:01:40.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:01:41.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:42.365: INFO: rc: 1 Jun 17 22:01:42.365: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:42.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:42.795: INFO: rc: 1 Jun 17 22:01:42.795: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:43.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:44.010: INFO: rc: 1 Jun 17 22:01:44.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:01:44.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753' Jun 17 22:01:44.792: INFO: rc: 1 Jun 17 22:01:44.792: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32753 nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
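For readers reconstructing the failure, the retry loop condensed above reduces to the following minimal, self-contained Go sketch. It is an illustration only, not the framework's actual helper (the real logic lives in test/e2e/network/service.go); the name probeOnce and the loop structure are invented for this sketch, and it assumes a configured kubectl on PATH.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // probeOnce runs the same probe the log shows: exec into the client pod
    // and attempt a 2s TCP connect to the node IP / NodePort with nc.
    func probeOnce(ns, pod, host string, port int) error {
    	script := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
    	cmd := exec.Command("kubectl", "--namespace="+ns, "exec", pod, "--",
    		"/bin/sh", "-x", "-c", script)
    	return cmd.Run() // non-nil on "Connection refused" (exit code 1)
    }

    func main() {
    	// Retry roughly once per second for two minutes, mirroring the
    	// 2m0s timeout reported in the failure below.
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if probeOnce("services-6595", "execpod-affinity5sxw2",
    			"10.10.190.207", 32753) == nil {
    			fmt.Println("service reachable")
    			return
    		}
    		fmt.Println("Retrying...")
    		time.Sleep(time.Second)
    	}
    	fmt.Println("service is not reachable within 2m0s timeout")
    }

Against a healthy Service the loop exits on the first successful connect; in this run every attempt was refused, which is exactly the 2m0s failure recorded next.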
Jun 17 22:01:45.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753'
Jun 17 22:01:46.433: INFO: rc: 1
Jun 17 22:01:46.433: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6595 exec execpod-affinity5sxw2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32753:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32753
nc: connect to 10.10.190.207 port 32753 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:01:46.433: FAIL: Unexpected error:
    <*errors.errorString | 0xc002088520>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32753 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32753 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc001bb54a0, 0x77b33d8, 0xc002cf6420, 0xc000b36f00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000b1db00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000b1db00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000b1db00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 17 22:01:46.435: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6595, will wait for the garbage collector to delete the pods
Jun 17 22:01:46.509: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.029863ms
Jun 17 22:01:46.610: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.10441ms
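The helper in the stack trace above, execAffinityTestForSessionAffinityTimeout, exercises a NodePort Service whose ClientIP session affinity expires after a short timeout. A minimal sketch of that shape of Service, built with client-go types, follows; the selector, ports, and the 10-second timeout here are illustrative assumptions for the sketch, not the test's exact spec.

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
    	// Illustrative NodePort Service with ClientIP affinity and a short
    	// affinity timeout; field values are assumptions for this sketch.
    	svc := &v1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
    		Spec: v1.ServiceSpec{
    			Type:            v1.ServiceTypeNodePort,
    			Selector:        map[string]string{"name": "affinity-nodeport-timeout"}, // assumed pod label
    			SessionAffinity: v1.ServiceAffinityClientIP,
    			SessionAffinityConfig: &v1.SessionAffinityConfig{
    				// Affinity entries expire after this many seconds.
    				ClientIP: &v1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)}, // assumed value
    			},
    			// kube-proxy allocates the NodePort itself (32753 in this run).
    			Ports: []v1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(9376)}}, // assumed ports
    		},
    	}
    	fmt.Printf("%s: affinity=%s timeout=%ds\n", svc.Name,
    		svc.Spec.SessionAffinity,
    		*svc.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
    }

Note that the repeated 'Connection refused' above means the NodePort never accepted a connection at all, so the test failed on basic reachability before session affinity could even be measured.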
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6595".
STEP: Found 35 events.
Jun 17 22:01:58.428: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-gvczm: { } Scheduled: Successfully assigned services-6595/affinity-nodeport-timeout-gvczm to node2
Jun 17 22:01:58.428: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-jd99b: { } Scheduled: Successfully assigned services-6595/affinity-nodeport-timeout-jd99b to node2
Jun 17 22:01:58.428: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-kkz2m: { } Scheduled: Successfully assigned services-6595/affinity-nodeport-timeout-kkz2m to node1
Jun 17 22:01:58.428: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity5sxw2: { } Scheduled: Successfully assigned services-6595/execpod-affinity5sxw2 to node1
Jun 17 22:01:58.428: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-6595/kube-proxy-mode-detector to node2
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:13 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:13 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 403.866116ms
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:13 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:15 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:19 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-jd99b
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:19 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-kkz2m
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:19 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-gvczm
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:19 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:24 +0000 UTC - event for affinity-nodeport-timeout-kkz2m: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:25 +0000 UTC - event for affinity-nodeport-timeout-kkz2m: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 329.612937ms
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:25 +0000 UTC - event for affinity-nodeport-timeout-kkz2m: {kubelet node1} Created: Created container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:26 +0000 UTC - event for affinity-nodeport-timeout-kkz2m: {kubelet node1} Started: Started container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:27 +0000 UTC - event for affinity-nodeport-timeout-gvczm: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 483.143889ms
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:27 +0000 UTC - event for affinity-nodeport-timeout-gvczm: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-timeout-gvczm: {kubelet node2} Created: Created container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-timeout-jd99b: {kubelet node2} Created: Created container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-timeout-jd99b: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-timeout-jd99b: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 251.787448ms
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:29 +0000 UTC - event for affinity-nodeport-timeout-gvczm: {kubelet node2} Started: Started container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:29 +0000 UTC - event for affinity-nodeport-timeout-jd99b: {kubelet node2} Started: Started container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:37 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "32538d83140fea0293fea6d0dac5ffc773527622b1d9a6ec350f52a10d0d8cf4" network for pod "execpod-affinity5sxw2": networkPlugin cni failed to set up pod "execpod-affinity5sxw2_services-6595" network: Multus: [services-6595/execpod-affinity5sxw2]: error setting the networks status: SetNetworkStatus: failed to update the pod execpod-affinity5sxw2 in out of cluster comm: SetNetworkStatus: failed to update the pod execpod-affinity5sxw2 in out of cluster comm: status update failed for pod /: resource name may not be empty
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:37 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:39 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:39 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} Started: Started container agnhost-container
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:39 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} Created: Created container agnhost-container
Jun 17 22:01:58.428: INFO: At 2022-06-17 21:59:39 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 333.141801ms
Jun 17 22:01:58.428: INFO: At 2022-06-17 22:01:46 +0000 UTC - event for affinity-nodeport-timeout-gvczm: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 22:01:46 +0000 UTC - event for affinity-nodeport-timeout-jd99b: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 22:01:46 +0000 UTC - event for affinity-nodeport-timeout-kkz2m: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout
Jun 17 22:01:58.428: INFO: At 2022-06-17 22:01:46 +0000 UTC - event for execpod-affinity5sxw2: {kubelet node1} Killing: Stopping container agnhost-container
Jun 17 22:01:58.431: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 17 22:01:58.431: INFO: 
Jun 17 22:01:58.435: INFO: Logging node info for node master1
Jun 17 22:01:58.437: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 35810 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:49 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:01:58.438: INFO: Logging kubelet events for node master1 Jun 17 22:01:58.440: INFO: Logging pods the kubelet 
thinks are on node master1
Jun 17 22:01:58.460: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.460: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.460: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.460: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.460: INFO: Container kube-scheduler ready: true, restart count 0
Jun 17 22:01:58.460: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.460: INFO: Container kube-proxy ready: true, restart count 2
Jun 17 22:01:58.460: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.460: INFO: Container docker-registry ready: true, restart count 0
Jun 17 22:01:58.460: INFO: Container nginx ready: true, restart count 0
Jun 17 22:01:58.461: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.461: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.461: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.461: INFO: Container kube-apiserver ready: true, restart count 0
Jun 17 22:01:58.461: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.461: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 17 22:01:58.461: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.461: INFO: Init container install-cni ready: true, restart count 2
Jun 17 22:01:58.461: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.540: INFO: Latency metrics for node master1
Jun 17 22:01:58.540: INFO: Logging node info for node master2
Jun 17 22:01:58.543: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 35977 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:57 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 22:01:58.543: INFO: Logging kubelet events for node master2
Jun 17 22:01:58.545: INFO: Logging pods the kubelet thinks are on node master2
Jun 17 22:01:58.557: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container autoscaler ready: true, restart count 1
Jun 17 22:01:58.557: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container kube-apiserver ready: true, restart count 0
Jun 17 22:01:58.557: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container kube-proxy ready: true, restart count 1
Jun 17 22:01:58.557: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.557: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container coredns ready: true, restart count 1
Jun 17 22:01:58.557: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.557: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.557: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 17 22:01:58.557: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container kube-scheduler ready: true, restart count 2
Jun 17 22:01:58.557: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Init container install-cni ready: true, restart count 2
Jun 17 22:01:58.557: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.557: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.557: INFO: Container nfd-controller ready: true, restart count 0
Jun 17 22:01:58.637: INFO: Latency metrics for node master2
Jun 17 22:01:58.637: INFO: Logging node info for node master3
Jun 17 22:01:58.641: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 35932 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:]
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:55 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 22:01:58.641: INFO: Logging kubelet events for node master3
Jun 17 22:01:58.644: INFO: Logging pods the kubelet thinks are on node master3
Jun 17 22:01:58.652: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.652: INFO: Container kube-controller-manager ready: true, restart count 2
Jun 17 22:01:58.652: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.652: INFO: Container coredns ready: true, restart count 1
Jun 17 22:01:58.652: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.652: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.652: INFO: Container prometheus-operator ready: true, restart count 0
Jun 17 22:01:58.652: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.652: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.652: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.652: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.652: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.653: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.653: INFO: Container kube-apiserver ready: true, restart count 0
Jun 17 22:01:58.653: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.653: INFO: Container kube-scheduler ready: true, restart count 2
Jun 17 22:01:58.653: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.653: INFO: Container kube-proxy ready: true, restart count 1
Jun 17 22:01:58.653: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.653: INFO: Init container install-cni ready: true, restart count 0
Jun 17 22:01:58.653: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.740: INFO: Latency metrics for node master3
Jun 17 22:01:58.740: INFO: Logging node info for node node1
Jun 17 22:01:58.742: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 35868 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true
feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 22:01:58.743: INFO: Logging kubelet events for node node1
Jun 17 22:01:58.745: INFO: Logging pods the kubelet thinks are on node node1
Jun 17 22:01:58.764: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container kubernetes-dashboard ready: true, restart count 2
Jun 17 22:01:58.764: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container tas-extender ready: true, restart count 0
Jun 17 22:01:58.764: INFO: execpod8gpkx started at 2022-06-17 22:00:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container agnhost-container ready: true, restart count 0
Jun 17 22:01:58.764: INFO: server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 started at 2022-06-17 22:01:49 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container srv ready: true, restart count 0
Jun 17 22:01:58.764: INFO: busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e started at 2022-06-17 22:01:54 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e ready: false, restart count 0
Jun 17 22:01:58.764: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container nfd-worker ready: true, restart count 0
Jun 17 22:01:58.764: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container config-reloader ready: true, restart count 0
Jun 17 22:01:58.764: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 17 22:01:58.764: INFO: Container grafana ready: true, restart count 0
Jun 17 22:01:58.764: INFO: Container prometheus ready: true, restart count 1
Jun 17 22:01:58.764: INFO: externalname-service-rm766 started at 2022-06-17 21:59:29 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container externalname-service ready: true, restart count 0
Jun 17 22:01:58.764: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 22:01:58.764: INFO: Container collectd ready: true, restart count 0
Jun 17 22:01:58.764: INFO: Container collectd-exporter ready: true, restart count 0
Jun 17 22:01:58.764: INFO: Container rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.764: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:58.764: INFO: Init container install-cni ready: true, restart count 2
Jun 17 22:01:58.764: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:58.765: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 17 22:01:58.765: INFO: var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912 started at 2022-06-17 21:59:39 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container dapi-container ready: true, restart count 0
Jun 17 22:01:58.765: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container discover ready: false, restart count 0
Jun 17 22:01:58.765: INFO: Container init ready: false, restart count 0
Jun 17 22:01:58.765: INFO: Container install ready: false, restart count 0
Jun 17 22:01:58.765: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:58.765: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:58.765: INFO: affinity-nodeport-transition-rwm2c started at 2022-06-17 21:59:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 17 22:01:58.765: INFO: netserver-0 started at 2022-06-17 22:01:21 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container webserver ready: false, restart count 0
Jun 17 22:01:58.765: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container cmk-webhook ready: true, restart count 0
Jun 17 22:01:58.765: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container kube-proxy ready: true, restart count 2
Jun 17 22:01:58.765: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container nodereport ready: true, restart count 0
Jun 17 22:01:58.765: INFO: Container reconcile ready: true, restart count 0
Jun 17 22:01:58.765: INFO: sample-apiserver-deployment-64f6b9dc99-hx87j started at 2022-06-17 22:01:45 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container etcd ready: false, restart count 0
Jun 17 22:01:58.765: INFO: Container sample-apiserver ready: false, restart count 0
Jun 17 22:01:58.765: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container nginx-proxy ready: true, restart count 2
Jun 17 22:01:58.765: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:58.765: INFO: client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a started at 2022-06-17 22:01:53 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container env3cont ready: false, restart count 0
Jun 17 22:01:58.765: INFO: adopt-release-p7hdj started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:58.765: INFO: Container c ready: false, restart count 0
Jun 17 22:01:59.065: INFO: Latency metrics for node node1
Jun 17 22:01:59.065: INFO: Logging node info for node node2
Jun 17 22:01:59.068: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 35869 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:52 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
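The dump above is the framework's one-line rendering of the v1 Node object for node2. For spot-checking the same fields against a live cluster, a minimal sketch using only standard kubectl subcommands (the kubeconfig path is the one used throughout this run):

  # NFD / SR-IOV feature labels as shown in the dump
  kubectl --kubeconfig=/root/.kube/config get node node2 --show-labels
  # capacity vs. allocatable (cpu, memory, hugepages, intel.com/intel_sriov_netdevice, ...)
  kubectl --kubeconfig=/root/.kube/config get node node2 -o jsonpath='{.status.allocatable}'
  # conditions, addresses, images, and daemon endpoints in readable form
  kubectl --kubeconfig=/root/.kube/config describe node node2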
Jun 17 22:01:59.069: INFO: Logging kubelet events for node node2
Jun 17 22:01:59.071: INFO: Logging pods the kubelet thinks are on node node2
Jun 17 22:01:59.092: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container nodereport ready: true, restart count 0
Jun 17 22:01:59.092: INFO: Container reconcile ready: true, restart count 0
Jun 17 22:01:59.092: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container collectd ready: true, restart count 0
Jun 17 22:01:59.092: INFO: Container collectd-exporter ready: true, restart count 0
Jun 17 22:01:59.092: INFO: Container rbac-proxy ready: true, restart count 0
Jun 17 22:01:59.092: INFO: netserver-1 started at 2022-06-17 22:01:21 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container webserver ready: false, restart count 0
Jun 17 22:01:59.092: INFO: affinity-nodeport-transition-pmhvr started at 2022-06-17 21:59:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 17 22:01:59.092: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container nginx-proxy ready: true, restart count 2
Jun 17 22:01:59.092: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container kube-proxy ready: true, restart count 2
Jun 17 22:01:59.092: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container kube-multus ready: true, restart count 1
Jun 17 22:01:59.092: INFO: test-container-pod started at 2022-06-17 22:01:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container webserver ready: false, restart count 0
Jun 17 22:01:59.092: INFO: send-events-26f1edeb-46d9-4567-b1b2-926258bc968b started at 2022-06-17 22:01:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container p ready: true, restart count 0
Jun 17 22:01:59.092: INFO: execpodt5pz9 started at 2022-06-17 21:59:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container agnhost-container ready: true, restart count 0
Jun 17 22:01:59.092: INFO: nodeport-test-l42bj started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container nodeport-test ready: true, restart count 0
Jun 17 22:01:59.092: INFO: busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 started at 2022-06-17 22:01:40 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 ready: true, restart count 0
Jun 17 22:01:59.092: INFO: affinity-nodeport-transition-5p5xs started at 2022-06-17 21:59:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jun 17 22:01:59.092: INFO: externalname-service-krpfv started at 2022-06-17 21:59:29 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container externalname-service ready: true, restart count 0
Jun 17 22:01:59.092: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 17 22:01:59.092: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 17 22:01:59.092: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 17 22:01:59.092: INFO: Container node-exporter ready: true, restart count 0
Jun 17 22:01:59.092: INFO: test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 started at 2022-06-17 22:01:26 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container test-webserver ready: true, restart count 0
Jun 17 22:01:59.092: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Init container install-cni ready: true, restart count 2
Jun 17 22:01:59.092: INFO: Container kube-flannel ready: true, restart count 2
Jun 17 22:01:59.092: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container nfd-worker ready: true, restart count 0
Jun 17 22:01:59.092: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container discover ready: false, restart count 0
Jun 17 22:01:59.092: INFO: Container init ready: false, restart count 0
Jun 17 22:01:59.092: INFO: Container install ready: false, restart count 0
Jun 17 22:01:59.092: INFO: adopt-release-bklcl started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container c ready: false, restart count 0
Jun 17 22:01:59.092: INFO: nodeport-test-kqgs5 started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded)
Jun 17 22:01:59.092: INFO: Container nodeport-test ready: true, restart count 0
Jun 17 22:01:59.334: INFO: Latency metrics for node node2
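The listing above is what the kubelet reported for node2 at failure time. Roughly the same view can be reproduced outside the harness with a field selector on spec.nodeName; this is a sketch, not part of the test framework:

  kubectl --kubeconfig=/root/.kube/config get pods --all-namespaces -o wide \
    --field-selector spec.nodeName=node2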
Jun 17 22:01:59.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6595" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [170.247 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jun 17 22:01:46.433: Unexpected error:
      <*errors.errorString | 0xc002088520>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32753 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32753 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":11,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
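The failure above means no TCP connection could be made to NodePort 32753 on 10.10.190.207 within the 2m0s budget. When triaging a refusal like this by hand, the usual first checks are whether the Service still has ready endpoints and which nodePort was actually allocated. A sketch with placeholder service and namespace names, since namespace services-6595 has already been destroyed by the time the failure is printed:

  # do the Service's selectors still match any ready pods?
  kubectl --kubeconfig=/root/.kube/config -n <namespace> get endpoints <service>
  # which nodePort did the apiserver allocate?
  kubectl --kubeconfig=/root/.kube/config -n <namespace> get svc <service> \
    -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
  # on the node itself: check whether anything is bound on that port (varies by kube-proxy mode)
  ss -ltn | grep <nodePort>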
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 21:59:28.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-3974
STEP: creating service affinity-nodeport-transition in namespace services-3974
STEP: creating replication controller affinity-nodeport-transition in namespace services-3974
I0617 21:59:28.390978 29 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3974, replica count: 3
I0617 21:59:31.441826 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:34.442355 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:37.442738 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:40.445757 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 21:59:43.446508 29 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 17 21:59:43.460: INFO: Creating new exec pod
Jun 17 21:59:54.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Jun 17 21:59:54.726: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jun 17 21:59:54.727: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 17 21:59:54.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.61.150 80'
Jun 17 21:59:55.029: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.61.150 80\nConnection to 10.233.61.150 80 port [tcp/http] succeeded!\n"
Jun 17 21:59:55.029: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 17 21:59:55.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464'
Jun 17 21:59:55.479: INFO: rc: 1
Jun 17 21:59:55.480: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30464
nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
Jun 17 21:59:56.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464'
Jun 17 21:59:56.734: INFO: rc: 1
Jun 17 21:59:56.734: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30464
nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
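Both probes above succeeded for the service name and the ClusterIP (10.233.61.150), so the failure that follows is specific to the node IP / NodePort path; the "400 Bad Request" bodies are expected, since the probe only pipes the literal string hostName at the port and the harness appears to care only that the TCP connect succeeds. Two hand checks that mirror what the harness does, assuming the exec pod and service still exist; this is a sketch, not framework output:

  # confirm the affinity setting and the allocated nodePort on the service under test
  kubectl --kubeconfig=/root/.kube/config -n services-3974 get svc affinity-nodeport-transition \
    -o jsonpath='{.spec.sessionAffinity} {.spec.ports[0].nodePort}{"\n"}'
  # re-run the exact probe the harness uses
  kubectl --kubeconfig=/root/.kube/config -n services-3974 exec execpod-affinityjrb7w -- \
    /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 30464'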
Jun 17 21:59:57.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 21:59:57.725: INFO: rc: 1 Jun 17 21:59:57.725: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 21:59:58.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 21:59:59.145: INFO: rc: 1 Jun 17 21:59:59.145: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 21:59:59.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 21:59:59.873: INFO: rc: 1 Jun 17 21:59:59.873: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:00.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:00.718: INFO: rc: 1 Jun 17 22:00:00.718: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:01.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:01.731: INFO: rc: 1 Jun 17 22:00:01.731: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:02.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:02.727: INFO: rc: 1 Jun 17 22:00:02.727: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:03.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:03.751: INFO: rc: 1 Jun 17 22:00:03.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:04.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:04.711: INFO: rc: 1 Jun 17 22:00:04.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:05.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:05.746: INFO: rc: 1 Jun 17 22:00:05.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:06.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:06.729: INFO: rc: 1 Jun 17 22:00:06.729: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:07.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:07.725: INFO: rc: 1 Jun 17 22:00:07.725: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:08.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:08.766: INFO: rc: 1 Jun 17 22:00:08.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:09.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:09.771: INFO: rc: 1 Jun 17 22:00:09.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:10.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:10.756: INFO: rc: 1 Jun 17 22:00:10.756: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:11.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:12.270: INFO: rc: 1 Jun 17 22:00:12.270: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:12.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:12.718: INFO: rc: 1 Jun 17 22:00:12.718: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:13.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:13.765: INFO: rc: 1 Jun 17 22:00:13.765: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:14.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:14.813: INFO: rc: 1 Jun 17 22:00:14.813: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:15.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:16.105: INFO: rc: 1 Jun 17 22:00:16.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30464 + echo hostName nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:16.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:16.852: INFO: rc: 1 Jun 17 22:00:16.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:17.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:17.798: INFO: rc: 1 Jun 17 22:00:17.798: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:18.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:18.872: INFO: rc: 1 Jun 17 22:00:18.872: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:19.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:19.713: INFO: rc: 1 Jun 17 22:00:19.713: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:20.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:20.777: INFO: rc: 1 Jun 17 22:00:20.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:21.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:21.737: INFO: rc: 1 Jun 17 22:00:21.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:22.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:22.868: INFO: rc: 1 Jun 17 22:00:22.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:23.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:23.730: INFO: rc: 1 Jun 17 22:00:23.730: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:24.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:24.735: INFO: rc: 1 Jun 17 22:00:24.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:25.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:25.847: INFO: rc: 1 Jun 17 22:00:25.847: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:26.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:26.890: INFO: rc: 1 Jun 17 22:00:26.890: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:27.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:27.995: INFO: rc: 1 Jun 17 22:00:27.995: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30464 + echo hostName nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:28.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:28.796: INFO: rc: 1 Jun 17 22:00:28.796: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:29.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:29.805: INFO: rc: 1 Jun 17 22:00:29.806: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:30.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:30.717: INFO: rc: 1 Jun 17 22:00:30.717: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:31.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:31.862: INFO: rc: 1 Jun 17 22:00:31.862: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:32.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:32.980: INFO: rc: 1 Jun 17 22:00:32.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:33.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:33.734: INFO: rc: 1 Jun 17 22:00:33.734: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:34.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:34.815: INFO: rc: 1 Jun 17 22:00:34.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:35.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:35.951: INFO: rc: 1 Jun 17 22:00:35.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:36.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:36.927: INFO: rc: 1 Jun 17 22:00:36.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:37.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:37.760: INFO: rc: 1 Jun 17 22:00:37.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:38.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:38.718: INFO: rc: 1 Jun 17 22:00:38.718: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:39.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:39.731: INFO: rc: 1 Jun 17 22:00:39.731: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:40.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:40.723: INFO: rc: 1 Jun 17 22:00:40.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:00:41.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:41.727: INFO: rc: 1 Jun 17 22:00:41.727: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:42.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:42.702: INFO: rc: 1 Jun 17 22:00:42.702: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:43.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:43.715: INFO: rc: 1 Jun 17 22:00:43.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:44.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464' Jun 17 22:00:44.850: INFO: rc: 1 Jun 17 22:00:44.850: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30464 nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[... the identical probe/failure block repeats roughly once per second, from 22:00:42.480 through the attempt at 22:01:55.480 — 73 further attempts in all — every one ending rc: 1 with 'nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused'. In a handful of attempts (22:00:52, 22:00:53, 22:01:04, 22:01:06) the two '+' trace lines appear re-ordered or interleaved, an artifact of sh -x tracing both sides of the pipeline concurrently ...]
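[Editor's note: the timestamps imply the probe is wrapped in a poll loop — one attempt roughly every second against a 2m0s deadline — after which the test raises exactly the error quoted below. The stack trace below names the real helper, execAffinityTestForNonLBServiceWithOptionalTransition in test/e2e/network/service.go; the sketch that follows only illustrates the loop's shape and is not that code. The function name pollUntilReachable and the stub probe are hypothetical.]

// Editor's sketch of the retry loop implied by the log's ~1s cadence.
package main

import (
	"fmt"
	"time"
)

// pollUntilReachable re-runs probe about once a second until it succeeds
// or the deadline passes, then fails with the same wording the suite
// prints below.
func pollUntilReachable(timeout time.Duration, endpoint string, probe func() error) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil // reachable; the session-affinity checks would start here
		}
		time.Sleep(time.Second) // matches the ~1 attempt/second cadence in the log
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, endpoint)
}

func main() {
	// The suite uses 2*time.Minute; shortened here so the demo finishes quickly.
	err := pollUntilReachable(5*time.Second, "10.10.190.207:30464", func() error {
		return fmt.Errorf("connection refused") // stand-in for the kubectl/nc probe
	})
	fmt.Println(err)
}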
Jun 17 22:01:55.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464'
Jun 17 22:01:56.122: INFO: rc: 1
Jun 17 22:01:56.123: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3974 exec execpod-affinityjrb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30464:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30464
nc: connect to 10.10.190.207 port 30464 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 22:01:56.123: FAIL: Unexpected error:
    <*errors.errorString | 0xc000c607e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30464 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30464 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0015badc0, 0x77b33d8, 0xc003facdc0, 0xc003d08a00, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001abcf00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001abcf00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001abcf00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 17 22:01:56.124: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-3974, will wait for the garbage collector to delete the pods
Jun 17 22:01:56.198: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.668159ms
Jun 17 22:01:56.299: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 101.27091ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3974".
STEP: Found 27 events.
Jun 17 22:02:09.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-5p5xs: { } Scheduled: Successfully assigned services-3974/affinity-nodeport-transition-5p5xs to node2
Jun 17 22:02:09.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-pmhvr: { } Scheduled: Successfully assigned services-3974/affinity-nodeport-transition-pmhvr to node2
Jun 17 22:02:09.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-rwm2c: { } Scheduled: Successfully assigned services-3974/affinity-nodeport-transition-rwm2c to node1
Jun 17 22:02:09.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityjrb7w: { } Scheduled: Successfully assigned services-3974/execpod-affinityjrb7w to node2
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-pmhvr
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-5p5xs
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:28 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-rwm2c
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:31 +0000 UTC - event for affinity-nodeport-transition-rwm2c: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:32 +0000 UTC - event for affinity-nodeport-transition-pmhvr: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:32 +0000 UTC - event for affinity-nodeport-transition-rwm2c: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 359.04537ms
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:32 +0000 UTC - event for affinity-nodeport-transition-rwm2c: {kubelet node1} Created: Created container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:32 +0000 UTC - event for affinity-nodeport-transition-rwm2c: {kubelet node1} Started: Started container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:33 +0000 UTC - event for affinity-nodeport-transition-5p5xs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for affinity-nodeport-transition-5p5xs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 7.133924215s
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for affinity-nodeport-transition-5p5xs: {kubelet node2} Created: Created container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for affinity-nodeport-transition-5p5xs: {kubelet node2} Started: Started container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for affinity-nodeport-transition-pmhvr: {kubelet node2} Created: Created container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for affinity-nodeport-transition-pmhvr: {kubelet node2} Started: Started container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:40 +0000 UTC - event for affinity-nodeport-transition-pmhvr: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 7.281970379s
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:47 +0000 UTC - event for execpod-affinityjrb7w: {kubelet node2} Created: Created container agnhost-container
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:47 +0000 UTC - event for execpod-affinityjrb7w: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:47 +0000 UTC - event for execpod-affinityjrb7w: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 485.427852ms
Jun 17 22:02:09.418: INFO: At 2022-06-17 21:59:48 +0000 UTC - event for execpod-affinityjrb7w: {kubelet node2} Started: Started container agnhost-container
Jun 17 22:02:09.418: INFO: At 2022-06-17 22:01:56 +0000 UTC - event for affinity-nodeport-transition-5p5xs: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 22:01:56 +0000 UTC - event for affinity-nodeport-transition-pmhvr: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 22:01:56 +0000 UTC - event for affinity-nodeport-transition-rwm2c: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
Jun 17 22:02:09.418: INFO: At 2022-06-17 22:01:56 +0000 UTC - event for execpod-affinityjrb7w: {kubelet node2} Killing: Stopping container agnhost-container
Jun 17 22:02:09.420: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 17 22:02:09.420: INFO:
Jun 17 22:02:09.424: INFO: Logging node info for node master1
Jun 17 22:02:09.426: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 36054 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true
flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:59 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:59 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:01:59 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:01:59 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:09.426: INFO: Logging kubelet events for node master1 Jun 17 22:02:09.429: INFO: Logging pods the kubelet thinks is on node master1 Jun 17 22:02:09.450: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:09.450: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:09.450: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:09.450: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.450: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:09.450: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.450: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:02:09.450: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.450: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:02:09.450: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.450: INFO: Container docker-registry ready: true, restart count 0 Jun 17 22:02:09.450: INFO: Container nginx ready: true, restart count 0 Jun 17 22:02:09.450: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.450: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:09.450: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:09.450: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.450: INFO: Container kube-scheduler ready: true, restart count 0 Jun 17 22:02:09.450: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.450: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:02:09.533: INFO: Latency metrics for node master1 Jun 17 22:02:09.533: INFO: Logging node info for node master2 Jun 17 22:02:09.536: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 36207 0 
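------------------------------
Each "Node Info:" entry in this dump is a complete v1.Node struct printed verbatim, which is what makes these lines so long; the health summary is carried by the five NodeConditions shown for master1 above (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready). A sketch that fetches just those conditions, with the node name and kubeconfig path as assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints one line per condition instead of the full struct dump.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
------------------------------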
2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 
UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:07 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:07 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:07 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:07 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:09.537: INFO: Logging kubelet events for node master2 Jun 17 22:02:09.539: INFO: Logging pods the kubelet thinks is on node master2 Jun 17 22:02:09.549: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container nfd-controller ready: true, restart count 0 Jun 17 22:02:09.549: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.549: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:09.549: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:09.549: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:02:09.549: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:02:09.549: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:09.549: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:09.549: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container coredns ready: true, restart count 1 Jun 17 22:02:09.549: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container autoscaler ready: true, restart count 1 Jun 17 22:02:09.549: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 
22:02:09.549: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:02:09.549: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.549: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:09.635: INFO: Latency metrics for node master2 Jun 17 22:02:09.635: INFO: Logging node info for node master3 Jun 17 22:02:09.638: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 36189 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:05 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:05 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:05 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:05 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:09.638: INFO: Logging kubelet events for node master3 Jun 17 22:02:09.641: INFO: Logging pods the kubelet thinks is on node master3 Jun 17 22:02:09.650: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:02:09.650: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Container coredns ready: true, restart count 1 Jun 17 22:02:09.650: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:09.650: INFO: Container prometheus-operator ready: true, restart count 0 Jun 17 22:02:09.650: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:09.650: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:09.650: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:09.650: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:02:09.650: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-scheduler ready: true, 
restart count 2 Jun 17 22:02:09.650: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:02:09.650: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:09.650: INFO: Init container install-cni ready: true, restart count 0 Jun 17 22:02:09.650: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:09.736: INFO: Latency metrics for node master3 Jun 17 22:02:09.736: INFO: Logging node info for node node1 Jun 17 22:02:09.739: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 36094 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
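------------------------------
The feature.node.kubernetes.io/* labels listed here for node1 are published by node-feature-discovery (v0.8.2, per the nfd.node.kubernetes.io/worker.version annotation), and tests or workloads can target such nodes with an ordinary label selector. A sketch, reusing one of the SR-IOV labels from this dump (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Selector copied from the NFD labels in the node dump above.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "feature.node.kubernetes.io/network-sriov.capable=true",
	})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name) // node1 and node2 carry this label in this run
	}
}
------------------------------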
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:09.739: INFO: Logging kubelet events for node node1 Jun 17 22:02:09.742: INFO: Logging pods the kubelet thinks is on node node1 Jun 17 22:02:09.758: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:09.758: INFO: Container collectd ready: true, restart count 0 Jun 17 22:02:09.758: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:02:09.758: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:02:09.758: INFO: var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912 started at 2022-06-17 21:59:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Container dapi-container ready: true, restart count 0 Jun 17 22:02:09.758: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:09.758: INFO: 
Container kube-flannel ready: true, restart count 2 Jun 17 22:02:09.758: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:02:09.758: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:09.758: INFO: Container discover ready: false, restart count 0 Jun 17 22:02:09.758: INFO: Container init ready: false, restart count 0 Jun 17 22:02:09.758: INFO: Container install ready: false, restart count 0 Jun 17 22:02:09.758: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.758: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:09.758: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:09.758: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:02:09.758: INFO: replace-27591722-kb5sh started at 2022-06-17 22:02:00 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Container c ready: false, restart count 0 Jun 17 22:02:09.758: INFO: sample-apiserver-deployment-64f6b9dc99-hx87j started at 2022-06-17 22:01:45 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.758: INFO: Container etcd ready: false, restart count 0 Jun 17 22:02:09.758: INFO: Container sample-apiserver ready: false, restart count 0 Jun 17 22:02:09.758: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:02:09.758: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:09.758: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:02:09.758: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:02:09.758: INFO: client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a started at 2022-06-17 22:01:53 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.758: INFO: Container env3cont ready: false, restart count 0 Jun 17 22:02:09.758: INFO: adopt-release-p7hdj started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container c ready: false, restart count 0 Jun 17 22:02:09.759: INFO: execpodphlr7 started at 2022-06-17 22:02:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container agnhost-container ready: false, restart count 0 Jun 17 22:02:09.759: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:02:09.759: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:09.759: INFO: execpod8gpkx started at 2022-06-17 22:00:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:02:09.759: INFO: server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 started at 2022-06-17 22:01:49 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container srv ready: true, restart count 0 Jun 17 
22:02:09.759: INFO: busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e started at 2022-06-17 22:01:54 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e ready: false, restart count 0 Jun 17 22:02:09.759: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:02:09.759: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:02:09.759: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:09.759: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:02:09.759: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded) Jun 17 22:02:09.759: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:02:09.759: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:02:09.759: INFO: Container grafana ready: true, restart count 0 Jun 17 22:02:09.759: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:02:10.100: INFO: Latency metrics for node node1 Jun 17 22:02:10.101: INFO: Logging node info for node node2 Jun 17 22:02:10.104: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 36100 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
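------------------------------
The Capacity and Allocatable lists in these node dumps differ by what the kubelet reserves for system use: node1 above reports 80 CPUs of capacity but 77 allocatable, with about 21 GiB of memory held back, and node2 below shows the same split. A sketch that prints the two lists side by side (node name and kubeconfig path assumed; map iteration order is unspecified):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Compare raw capacity against what the scheduler may actually hand out.
	for name, capQty := range node.Status.Capacity {
		allocQty := node.Status.Allocatable[name]
		fmt.Printf("%-40s capacity=%-16s allocatable=%s\n", name, capQty.String(), allocQty.String())
	}
}
------------------------------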
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:02 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:10.105: INFO: Logging kubelet events for node node2 Jun 17 22:02:10.107: INFO: Logging pods the kubelet thinks are on node node2 Jun 17 22:02:10.119: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:02:10.119: INFO: externalsvc-mkq85 started at 2022-06-17 22:01:59 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container externalsvc ready: true, restart count 0 Jun 17 22:02:10.119: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:02:10.119: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:10.119: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:10.119: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:10.119: INFO: test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 started at 2022-06-17 22:01:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container test-webserver ready: true, restart count 0 Jun 17 22:02:10.119: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:10.119: INFO: Container kube-flannel
ready: true, restart count 2 Jun 17 22:02:10.119: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:02:10.119: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:10.119: INFO: Container discover ready: false, restart count 0 Jun 17 22:02:10.119: INFO: Container init ready: false, restart count 0 Jun 17 22:02:10.119: INFO: Container install ready: false, restart count 0 Jun 17 22:02:10.119: INFO: adopt-release-bklcl started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container c ready: true, restart count 0 Jun 17 22:02:10.119: INFO: nodeport-test-kqgs5 started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container nodeport-test ready: true, restart count 0 Jun 17 22:02:10.119: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:10.119: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:02:10.119: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:02:10.119: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:10.119: INFO: Container collectd ready: true, restart count 0 Jun 17 22:02:10.119: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:02:10.119: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:02:10.119: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:02:10.119: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:02:10.119: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:10.119: INFO: send-events-26f1edeb-46d9-4567-b1b2-926258bc968b started at 2022-06-17 22:01:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.119: INFO: Container p ready: true, restart count 0 Jun 17 22:02:10.120: INFO: nodeport-test-l42bj started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.120: INFO: Container nodeport-test ready: true, restart count 0 Jun 17 22:02:10.120: INFO: busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 started at 2022-06-17 22:01:40 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.120: INFO: Container busybox-readonly-fs55b4e198-b27e-4308-bce4-4ea01442a336 ready: true, restart count 0 Jun 17 22:02:10.120: INFO: externalsvc-pwcgs started at 2022-06-17 22:01:59 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:10.120: INFO: Container externalsvc ready: true, restart count 0 Jun 17 22:02:10.278: INFO: Latency metrics for node node2 Jun 17 22:02:10.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3974" for this suite. 
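The node dump and kubelet pod listing above are what the framework prints when a spec fails; the failure summary itself follows. The check that timed out is a reachability poll: the test repeatedly dials the NodePort endpoint (10.10.190.207:30464) over TCP until it connects or 2m0s elapses. A minimal sketch of that style of check, in plain Go with only the standard library (an illustration of the pattern, not the e2e framework's own helper):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // reachableTCP dials host:port until a connection succeeds or the
    // overall timeout elapses, sleeping briefly between attempts.
    func reachableTCP(endpoint string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, endpoint)
    }

    func main() {
        if err := reachableTCP("10.10.190.207:30464", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
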
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [161.927 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:01:56.123: Unexpected error: <*errors.errorString | 0xc000c607e0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30464 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30464 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":68,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:10.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-b2cbe14a-d508-4a31-bfab-9332cbc98603 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:10.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7213" for this suite. 
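The Secrets spec above succeeds because the API server rejects a Secret whose data map contains an empty key: Secret keys are validated with the same character rules as ConfigMap keys, and the empty string fails them. A sketch of that key check using apimachinery's exported validator (illustrative; the server's full Secret validation lives elsewhere in the kubernetes repo):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/util/validation"
    )

    func main() {
        // An empty key fails key validation, so the create is rejected.
        fmt.Println(validation.IsConfigMapKey(""))     // non-empty list of error strings
        fmt.Println(validation.IsConfigMapKey("key1")) // empty list: valid
    }
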
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":2,"skipped":84,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:49.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:01:49.830: INFO: The status of Pod server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:51.833: INFO: The status of Pod server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:53.833: INFO: The status of Pod server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 is Running (Ready = true) Jun 17 22:01:53.850: INFO: Waiting up to 5m0s for pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a" in namespace "pods-2051" to be "Succeeded or Failed" Jun 17 22:01:53.852: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.775534ms Jun 17 22:01:55.856: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005212229s Jun 17 22:01:57.859: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008831599s Jun 17 22:01:59.862: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011805258s Jun 17 22:02:01.866: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015241068s Jun 17 22:02:03.869: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018398826s Jun 17 22:02:05.873: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022751567s Jun 17 22:02:07.876: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025997639s Jun 17 22:02:09.880: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.029028893s Jun 17 22:02:11.883: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.03230408s STEP: Saw pod success Jun 17 22:02:11.883: INFO: Pod "client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a" satisfied condition "Succeeded or Failed" Jun 17 22:02:11.885: INFO: Trying to get logs from node node1 pod client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a container env3cont: STEP: delete the pod Jun 17 22:02:11.897: INFO: Waiting for pod client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a to disappear Jun 17 22:02:11.899: INFO: Pod client-envvars-38ddad96-0a9d-4c95-b902-5636348f134a no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:11.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2051" for this suite. • [SLOW TEST:22.118 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:54.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:01:54.462: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:56.466: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:01:58.465: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:00.466: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:02.466: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:04.465: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:06.467: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:08.469: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:10.466: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is 
Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:12.465: INFO: The status of Pod busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:12.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2133" for this suite. • [SLOW TEST:18.061 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:11.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:02:12.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db" in namespace "downward-api-1093" to be "Succeeded or Failed" Jun 17 22:02:12.005: INFO: Pod "downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016499ms Jun 17 22:02:14.010: INFO: Pod "downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006182991s Jun 17 22:02:16.014: INFO: Pod "downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011025866s STEP: Saw pod success Jun 17 22:02:16.015: INFO: Pod "downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db" satisfied condition "Succeeded or Failed" Jun 17 22:02:16.016: INFO: Trying to get logs from node node2 pod downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db container client-container: STEP: delete the pod Jun 17 22:02:16.029: INFO: Waiting for pod downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db to disappear Jun 17 22:02:16.031: INFO: Pod downwardapi-volume-6653b0b4-efd9-408d-9c1d-b2682b8687db no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:16.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1093" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":262,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:55.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jun 17 22:02:14.068: INFO: Successfully updated pod "adopt-release-bklcl" STEP: Checking that the Job readopts the Pod Jun 17 22:02:14.068: INFO: Waiting up to 15m0s for pod "adopt-release-bklcl" in namespace "job-4677" to be "adopted" Jun 17 22:02:14.070: INFO: Pod "adopt-release-bklcl": Phase="Running", Reason="", readiness=true. Elapsed: 2.021586ms Jun 17 22:02:16.075: INFO: Pod "adopt-release-bklcl": Phase="Running", Reason="", readiness=true. Elapsed: 2.006326711s Jun 17 22:02:16.075: INFO: Pod "adopt-release-bklcl" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jun 17 22:02:16.585: INFO: Successfully updated pod "adopt-release-bklcl" STEP: Checking that the Job releases the Pod Jun 17 22:02:16.585: INFO: Waiting up to 15m0s for pod "adopt-release-bklcl" in namespace "job-4677" to be "released" Jun 17 22:02:16.587: INFO: Pod "adopt-release-bklcl": Phase="Running", Reason="", readiness=true. Elapsed: 2.172458ms Jun 17 22:02:18.591: INFO: Pod "adopt-release-bklcl": Phase="Running", Reason="", readiness=true. Elapsed: 2.00648434s Jun 17 22:02:18.591: INFO: Pod "adopt-release-bklcl" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:18.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4677" for this suite. 
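The adopt/release behavior exercised above is driven entirely by ownerReferences: orphaning the Pod lets the Job controller re-adopt it by setting a controller reference, and stripping the Job's matching labels makes the controller drop that reference again. The "adopted"/"released" waits just inspect that field. A sketch of the inspection against the public API types (the Job name "adopt-release" is assumed from the pod name; requires k8s.io/api and k8s.io/apimachinery):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // controlledBy reports whether the pod carries a controller
    // ownerReference of the given kind and name.
    func controlledBy(pod *corev1.Pod, kind, name string) bool {
        for _, ref := range pod.OwnerReferences {
            if ref.Controller != nil && *ref.Controller && ref.Kind == kind && ref.Name == name {
                return true
            }
        }
        return false
    }

    func main() {
        ctrl := true
        pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
            Name: "adopt-release-bklcl",
            OwnerReferences: []metav1.OwnerReference{{
                APIVersion: "batch/v1", Kind: "Job",
                Name: "adopt-release", Controller: &ctrl, // Job name assumed for illustration
            }},
        }}
        fmt.Println(controlledBy(pod, "Job", "adopt-release")) // true => "adopted"
    }
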
• [SLOW TEST:23.074 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":19,"skipped":158,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:10.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:02:10.436: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 17 22:02:10.442: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 17 22:02:15.447: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 17 22:02:15.447: INFO: Creating deployment "test-rolling-update-deployment" Jun 17 22:02:15.451: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 17 22:02:15.456: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 17 22:02:17.462: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 17 22:02:17.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100135, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100135, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100135, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100135, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:02:19.467: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 22:02:19.474: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9221 4c5a0458-a560-4587-bd18-6c3e65adba96 36524 1 2022-06-17 22:02:15 +0000 UTC map[name:sample-pod] 
map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-06-17 22:02:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002c52ef8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-17 22:02:15 +0000 UTC,LastTransitionTime:2022-06-17 22:02:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-06-17 22:02:18 +0000 UTC,LastTransitionTime:2022-06-17 22:02:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 17 22:02:19.477: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-9221 4230c910-5e43-4a7b-943e-466fc361cf82 36510 1 2022-06-17 22:02:15 +0000 UTC map[name:sample-pod pod-template-hash:585b757574]
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 4c5a0458-a560-4587-bd18-6c3e65adba96 0xc002c533a7 0xc002c533a8}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c5a0458-a560-4587-bd18-6c3e65adba96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002c53438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:02:19.477: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 17 22:02:19.477: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9221 d95a6e62-8a20-4b81-aa51-6f5108a61909 36523 2 2022-06-17 22:02:10 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 4c5a0458-a560-4587-bd18-6c3e65adba96 0xc002c53297 0xc002c53298}] [] [{e2e.test Update apps/v1 2022-06-17 22:02:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c5a0458-a560-4587-bd18-6c3e65adba96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c53338 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:02:19.480: INFO: Pod "test-rolling-update-deployment-585b757574-km778" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-km778 test-rolling-update-deployment-585b757574- deployment-9221 4176eb4e-f9fe-4201-8631-0d8920dbfc6d 36509 0 2022-06-17 22:02:15 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.71" ], "mac": "7a:2c:9b:51:c8:9e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.71" ], "mac": "7a:2c:9b:51:c8:9e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 4230c910-5e43-4a7b-943e-466fc361cf82 0xc002c5384f 0xc002c53860}] [] [{kube-controller-manager Update v1 2022-06-17 22:02:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4230c910-5e43-4a7b-943e-466fc361cf82\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:02:17 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:02:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-95jkq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-95jkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.71,StartTime:2022-06-17 22:02:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:02:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://2066cebf5e2fca04701bdbd8deb9ed85b76ea35c99ace85493cd7f374a81f290,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:19.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9221" for this suite. 
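In the Deployment dumps above, the RollingUpdate strategy's 25% maxSurge / 25% maxUnavailable resolve against the replica count: surge rounds up and unavailable rounds down, which is why this one-replica deployment briefly reported Replicas:2, UpdatedReplicas:1 mid-rollout with zero pods allowed unavailable. A standalone sketch of that arithmetic (simplified; the real resolution lives in the deployment controller, which also guards against both values resolving to zero):

    package main

    import (
        "fmt"
        "math"
    )

    // resolve converts a percentage-based maxSurge/maxUnavailable into pod
    // counts for a given replica total: surge rounds up, unavailable rounds down.
    func resolve(percent float64, replicas int) (surge, unavailable int) {
        surge = int(math.Ceil(float64(replicas) * percent / 100))
        unavailable = int(math.Floor(float64(replicas) * percent / 100))
        return surge, unavailable
    }

    func main() {
        surge, unavailable := resolve(25, 1)
        fmt.Printf("replicas=1: maxSurge=%d, maxUnavailable=%d\n", surge, unavailable) // 1, 0
    }
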
• [SLOW TEST:9.080 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":3,"skipped":118,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:16.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:02:16.076: INFO: Creating deployment "test-recreate-deployment" Jun 17 22:02:16.080: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 17 22:02:16.085: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 17 22:02:18.092: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 17 22:02:18.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100136, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100136, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100136, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100136, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:02:20.097: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 17 22:02:20.103: INFO: Updating deployment test-recreate-deployment Jun 17 22:02:20.103: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 22:02:20.139: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3085 ad9e231a-3835-4b75-888c-2cfc462615bd 36597 2 2022-06-17 22:02:16 +0000 UTC map[name:sample-pod-3]
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-17 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006cb7de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-06-17 22:02:20 +0000 UTC,LastTransitionTime:2022-06-17 22:02:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-06-17 22:02:20 +0000 UTC,LastTransitionTime:2022-06-17 22:02:16 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 17 22:02:20.141: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-3085 0fe87acb-3b0e-477b-8c68-b5900120f2f0 36596 1 2022-06-17 22:02:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ad9e231a-3835-4b75-888c-2cfc462615bd 
0xc006d445b0 0xc006d445b1}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad9e231a-3835-4b75-888c-2cfc462615bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006d44628 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:02:20.142: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 17 22:02:20.142: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-3085 36171d40-6641-42ef-862b-150643e1c63c 36585 2 2022-06-17 22:02:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ad9e231a-3835-4b75-888c-2cfc462615bd 0xc006d444b7 0xc006d444b8}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:02:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad9e231a-3835-4b75-888c-2cfc462615bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006d44548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:02:20.145: INFO: Pod "test-recreate-deployment-85d47dcb4-p89h9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-p89h9 test-recreate-deployment-85d47dcb4- deployment-3085 d12a785f-6a3a-4412-b19f-1474871b2941 36598 0 2022-06-17 22:02:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 0fe87acb-3b0e-477b-8c68-b5900120f2f0 0xc006d44a5f 0xc006d44a70}] [] [{kube-controller-manager Update v1 2022-06-17 22:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fe87acb-3b0e-477b-8c68-b5900120f2f0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-17 22:02:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-759j6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-759j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-17 22:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:20.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3085" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":17,"skipped":267,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":15,"skipped":161,"failed":0} [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:44.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Jun 17 22:01:44.989: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
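
What "registering the sample API server" amounts to is creating an APIService object that routes the wardle.example.com/v1alpha1 group-version to an in-cluster Service. A minimal sketch of that registration using the kube-aggregator clientset; the Service name below is illustrative, and the real test also provisions certificates, RBAC, and the backing deployment before this step:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
        apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
        aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := aggregator.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        port := int32(443)
        // The APIService name must be "<version>.<group>"; the priorities control
        // where the group sorts in discovery. InsecureSkipTLSVerify stands in for
        // a real CABundle here, which is acceptable only in a test setting.
        _, err = client.ApiregistrationV1().APIServices().Create(context.TODO(), &apiregv1.APIService{
            ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
            Spec: apiregv1.APIServiceSpec{
                Group:                 "wardle.example.com",
                Version:               "v1alpha1",
                GroupPriorityMinimum:  2000,
                VersionPriority:       200,
                InsecureSkipTLSVerify: true,
                Service: &apiregv1.ServiceReference{
                    Namespace: "aggregator-4367",
                    Name:      "sample-api", // illustrative Service name
                    Port:      &port,
                },
            },
        }, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }

The `kubectl patch apiservice ... versionPriority` step later in this test is an update against the same object.
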
Jun 17 22:01:45.180: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jun 17 22:01:47.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
(an identical deployment status line was logged every 2s from 22:01:49.209 through 22:02:05.210)
Jun 17 22:02:07.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0,
ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:02:09.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100105, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:02:19.023: INFO: Waited 7.806959198s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Jun 17 22:02:19.425: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:20.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4367" for this suite. • [SLOW TEST:35.351 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:18.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 17 22:02:18.653: INFO: Waiting up to 5m0s for pod "downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988" in namespace "downward-api-8697" to be "Succeeded or Failed" Jun 17 22:02:18.656: INFO: Pod "downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416085ms Jun 17 22:02:20.659: INFO: Pod "downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005130445s Jun 17 22:02:22.662: INFO: Pod "downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009034045s Jun 17 22:02:24.666: INFO: Pod "downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012908166s STEP: Saw pod success Jun 17 22:02:24.666: INFO: Pod "downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988" satisfied condition "Succeeded or Failed" Jun 17 22:02:24.668: INFO: Trying to get logs from node node2 pod downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988 container dapi-container: STEP: delete the pod Jun 17 22:02:24.684: INFO: Waiting for pod downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988 to disappear Jun 17 22:02:24.687: INFO: Pod downward-api-ee24f3fd-63ba-418c-bc42-af0ed578c988 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:24.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8697" for this suite. • [SLOW TEST:6.076 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":168,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:19.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:02:19.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 create -f -' Jun 17 22:02:20.040: INFO: stderr: "" Jun 17 22:02:20.040: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jun 17 22:02:20.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 create -f -' Jun 17 22:02:20.410: INFO: stderr: "" Jun 17 22:02:20.410: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
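
The "Found 0 / 1 ... Found 1 / 1" lines that follow are a poll over pods matching the app=agnhost selector until the expected number are running and ready. A sketch of an equivalent loop in client-go; the helper below is illustrative, not the framework's own WaitFor:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll once per second, for up to 5 minutes (the timeout the log reports),
        // until one pod matching the selector is Running with Ready=True.
        err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
            pods, err := client.CoreV1().Pods("kubectl-8547").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app=agnhost"})
            if err != nil {
                return false, err
            }
            ready := 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    continue
                }
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready++
                    }
                }
            }
            fmt.Printf("Found %d / 1\n", ready)
            return ready == 1, nil
        })
        if err != nil {
            panic(err)
        }
    }
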
Jun 17 22:02:21.414: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:02:21.414: INFO: Found 0 / 1 Jun 17 22:02:22.414: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:02:22.414: INFO: Found 0 / 1 Jun 17 22:02:23.414: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:02:23.415: INFO: Found 0 / 1 Jun 17 22:02:24.413: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:02:24.413: INFO: Found 0 / 1 Jun 17 22:02:25.414: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:02:25.414: INFO: Found 1 / 1 Jun 17 22:02:25.414: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 17 22:02:25.416: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:02:25.416: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 17 22:02:25.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 describe pod agnhost-primary-876jh' Jun 17 22:02:25.627: INFO: stderr: "" Jun 17 22:02:25.627: INFO: stdout: "Name: agnhost-primary-876jh\nNamespace: kubectl-8547\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 17 Jun 2022 22:02:20 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.74\"\n ],\n \"mac\": \"ae:70:f8:7a:94:4f\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.74\"\n ],\n \"mac\": \"ae:70:f8:7a:94:4f\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.74\nIPs:\n IP: 10.244.3.74\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://d8bfa47db2e81c515950a3fe6a5ebb1059bd43844348e0900b948d274ebf2091\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 17 Jun 2022 22:02:23 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5bm46 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-5bm46:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-8547/agnhost-primary-876jh to node2\n Normal Pulling 3s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 515.860659ms\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Jun 17 22:02:25.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 describe rc agnhost-primary' Jun 17 22:02:25.830: INFO: stderr: "" Jun 17 22:02:25.830: INFO: 
stdout: "Name: agnhost-primary\nNamespace: kubectl-8547\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-876jh\n" Jun 17 22:02:25.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 describe service agnhost-primary' Jun 17 22:02:26.027: INFO: stderr: "" Jun 17 22:02:26.027: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8547\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.36.18\nIPs: 10.233.36.18\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.74:6379\nSession Affinity: None\nEvents: \n" Jun 17 22:02:26.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 describe node master1' Jun 17 22:02:26.252: INFO: stderr: "" Jun 17 22:02:26.252: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 17 Jun 2022 19:59:00 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 17 Jun 2022 22:02:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 17 Jun 2022 20:04:36 +0000 Fri, 17 Jun 2022 20:04:36 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 17 Jun 2022 22:02:19 +0000 Fri, 17 Jun 2022 19:58:57 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 17 Jun 2022 22:02:19 +0000 Fri, 17 Jun 2022 19:58:57 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 17 Jun 2022 22:02:19 +0000 Fri, 17 Jun 2022 19:58:57 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 17 Jun 2022 22:02:19 +0000 Fri, 17 Jun 2022 20:01:45 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518300Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629468Ki\n pods: 110\nSystem Info:\n Machine ID: f59e69c8e0cc41ff966b02f015e9cf30\n System 
UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603\n Kernel Version: 3.10.0-1160.66.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.17\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-hq7rp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 116m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 114m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 122m\n kube-system kube-flannel-z9nqz 150m (0%) 300m (0%) 64M (0%) 500M (0%) 120m\n kube-system kube-multus-ds-amd64-rqb4r 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 120m\n kube-system kube-proxy-b2xlr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 121m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 104m\n monitoring node-exporter-bts5h 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 107m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 912m (1%) 670m (0%)\n memory 368087040 (0%) 825058560 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jun 17 22:02:26.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8547 describe namespace kubectl-8547' Jun 17 22:02:26.432: INFO: stderr: "" Jun 17 22:02:26.432: INFO: stdout: "Name: kubectl-8547\nLabels: e2e-framework=kubectl\n e2e-run=1bdd06fa-69ab-4b01-a8ee-33c90b03bca7\n kubernetes.io/metadata.name=kubectl-8547\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:26.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8547" for this suite. 
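
Everything kubectl describe printed for the node above (the Conditions table, Capacity, Allocatable) is read straight off the Node object's status, so the same readout can be reproduced with a few lines of client-go; a sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := client.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // The Conditions table in the describe output is node.Status.Conditions.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-20s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
        // Capacity and Allocatable are ResourceLists keyed by resource name.
        fmt.Printf("cpu allocatable: %s, memory allocatable: %s\n",
            node.Status.Allocatable.Cpu(), node.Status.Allocatable.Memory())
    }
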
• [SLOW TEST:6.800 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":4,"skipped":208,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:12.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Jun 17 22:02:12.562: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:14.566: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:16.566: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:18.569: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 17 22:02:18.586: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:20.590: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:22.590: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 17 22:02:22.601: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 17 22:02:22.604: INFO: Pod pod-with-poststart-exec-hook still exists Jun 17 22:02:24.605: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 17 22:02:24.607: INFO: Pod pod-with-poststart-exec-hook still exists Jun 17 22:02:26.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 17 22:02:26.609: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:26.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6837" for this suite. 
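
For reference, the pod under test above declares its hook in the container spec. A sketch of such a pod in Go, using the v1.21-era corev1.Handler type (renamed LifecycleHandler in later releases); the handler-pod IP and hook command follow the test's pattern but are illustrative here:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithPostStartExecHook builds a pod whose container runs an exec hook
    // right after the container is created; Kubernetes holds the container's
    // state until the hook completes, and a failed hook kills the container.
    func podWithPostStartExecHook(handlerIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "pod-with-poststart-exec-hook",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    Args:  []string{"pause"}, // keep the container alive
                    Lifecycle: &corev1.Lifecycle{
                        PostStart: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                // Hit the handler pod so the hook's effect is observable there.
                                Command: []string{"sh", "-c", "curl http://" + handlerIP + ":8080/echo?msg=poststart"},
                            },
                        },
                    },
                }},
            },
        }
    }

    func main() {
        pod := podWithPostStartExecHook("10.244.3.70") // handler pod IP is illustrative
        fmt.Println(pod.Name)
    }
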
• [SLOW TEST:14.089 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":283,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":16,"skipped":161,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:20.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-3eaf1256-5a03-41b1-b2e9-34e79f7f92db STEP: Creating configMap with name cm-test-opt-upd-e9e2ef7f-38e5-4e39-81ba-3f57304e3898 STEP: Creating the pod Jun 17 22:02:20.365: INFO: The status of Pod pod-configmaps-ac082623-f0e5-441d-8065-c7e69716b5a3 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:22.368: INFO: The status of Pod pod-configmaps-ac082623-f0e5-441d-8065-c7e69716b5a3 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:24.369: INFO: The status of Pod pod-configmaps-ac082623-f0e5-441d-8065-c7e69716b5a3 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:26.368: INFO: The status of Pod pod-configmaps-ac082623-f0e5-441d-8065-c7e69716b5a3 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-3eaf1256-5a03-41b1-b2e9-34e79f7f92db STEP: Updating configmap cm-test-opt-upd-e9e2ef7f-38e5-4e39-81ba-3f57304e3898 STEP: Creating configMap with name cm-test-opt-create-1a4a8692-349e-4e0d-8a9c-8cdef0c72dd8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:28.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3813" for this suite. 
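
The cm-test-opt-* ConfigMaps above exercise the Optional flag on configMap volume sources: with Optional set, the pod is admitted and started even if the referenced ConfigMap is absent, and the kubelet later reflects creates, updates, and deletes into the mounted volume. A sketch of such a volume source (names shortened from the test's generated ones):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := true
        vol := corev1.Volume{
            Name: "cm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
                    // Optional=true: the pod starts even while this ConfigMap does
                    // not exist yet; the mount appears once it is created.
                    Optional: &optional,
                },
            },
        }
        fmt.Println(vol.Name)
    }
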
• [SLOW TEST:8.113 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:59.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8144 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8144 STEP: creating replication controller externalsvc in namespace services-8144 I0617 22:01:59.414324 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8144, replica count: 2 I0617 22:02:02.465973 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:02:05.467781 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 17 22:02:05.482: INFO: Creating new exec pod Jun 17 22:02:13.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8144 exec execpodphlr7 -- /bin/sh -x -c nslookup nodeport-service.services-8144.svc.cluster.local' Jun 17 22:02:14.104: INFO: stderr: "+ nslookup nodeport-service.services-8144.svc.cluster.local\n" Jun 17 22:02:14.104: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-8144.svc.cluster.local\tcanonical name = externalsvc.services-8144.svc.cluster.local.\nName:\texternalsvc.services-8144.svc.cluster.local\nAddress: 10.233.27.21\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8144, will wait for the garbage collector to delete the pods Jun 17 22:02:14.163: INFO: Deleting ReplicationController externalsvc took: 4.99504ms Jun 17 22:02:14.263: INFO: Terminating ReplicationController externalsvc pods took: 100.352341ms Jun 17 22:02:28.472: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:28.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8144" for this suite. 
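
The type flip in this test is an in-place update of the Service object; since ExternalName services carry neither a cluster IP nor node ports, those fields are cleared in the same update. A sketch with client-go, mirroring what the e2e helper does under the hood (assumed behavior; the helper's exact field handling may differ):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        svcs := client.CoreV1().Services("services-8144")
        svc, err := svcs.Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = "externalsvc.services-8144.svc.cluster.local"
        svc.Spec.ClusterIP = "" // ExternalName services may not carry a cluster IP
        svc.Spec.ClusterIPs = nil
        for i := range svc.Spec.Ports {
            svc.Spec.Ports[i].NodePort = 0 // node ports are only valid for NodePort/LoadBalancer
        }
        if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }

After the update, the nslookup in the log shows the old service name resolving as a CNAME to the externalName target.
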
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.111 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":1,"skipped":24,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":6,"skipped":97,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:00:13.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-9559 STEP: creating replication controller nodeport-test in namespace services-9559 I0617 22:00:13.880280 37 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9559, replica count: 2 I0617 22:00:16.932117 37 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:00:19.933324 37 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:00:19.933: INFO: Creating new exec pod Jun 17 22:00:24.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jun 17 22:00:25.267: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jun 17 22:00:25.267: INFO: stdout: "nodeport-test-l42bj" Jun 17 22:00:25.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.17.145 80' Jun 17 22:00:25.705: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.17.145 80\nConnection to 10.233.17.145 80 port [tcp/http] succeeded!\n" Jun 17 22:00:25.705: INFO: stdout: "nodeport-test-kqgs5" Jun 17 22:00:25.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:00:26.328: INFO: rc: 1 Jun 17 22:00:26.328: INFO: Service reachability failing with error: error 
running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:27.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:00:27.807: INFO: rc: 1 Jun 17 22:00:27.807: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + + ncecho -v -t hostName -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:28.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:00:28.950: INFO: rc: 1 Jun 17 22:00:28.950: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:29.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:00:29.930: INFO: rc: 1 Jun 17 22:00:29.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:00:30.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:00:30.779: INFO: rc: 1 Jun 17 22:00:30.779: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[... the same probe was retried roughly once per second from 22:00:28.329 through 22:02:11.631 (about 100 further attempts), every one returning rc: 1 with "nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused"; in a few iterations (22:00:27, 22:01:01, 22:01:10) the sh -x trace of the two pipeline stages prints out of order or byte-interleaved, e.g. "+ + ncecho -v -t hostName -w 2 10.10.190.207 30948", which is cosmetic only and does not change the result ...]
Jun 17 22:02:08.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:08.620: INFO: rc: 1 Jun 17 22:02:08.620: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:09.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:09.617: INFO: rc: 1 Jun 17 22:02:09.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:10.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:10.632: INFO: rc: 1 Jun 17 22:02:10.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:11.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:11.631: INFO: rc: 1 Jun 17 22:02:11.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:12.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:12.574: INFO: rc: 1 Jun 17 22:02:12.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:02:13.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:14.096: INFO: rc: 1 Jun 17 22:02:14.097: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:14.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:15.641: INFO: rc: 1 Jun 17 22:02:15.642: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:16.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:16.603: INFO: rc: 1 Jun 17 22:02:16.603: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:17.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:17.795: INFO: rc: 1 Jun 17 22:02:17.795: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:18.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:18.688: INFO: rc: 1 Jun 17 22:02:18.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:02:19.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:19.686: INFO: rc: 1 Jun 17 22:02:19.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:20.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:20.632: INFO: rc: 1 Jun 17 22:02:20.632: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:21.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:21.589: INFO: rc: 1 Jun 17 22:02:21.589: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:22.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:22.608: INFO: rc: 1 Jun 17 22:02:22.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:23.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:23.573: INFO: rc: 1 Jun 17 22:02:23.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:02:24.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:24.688: INFO: rc: 1 Jun 17 22:02:24.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:25.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:25.596: INFO: rc: 1 Jun 17 22:02:25.596: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:26.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:26.570: INFO: rc: 1 Jun 17 22:02:26.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:02:26.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948' Jun 17 22:02:26.822: INFO: rc: 1 Jun 17 22:02:26.822: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9559 exec execpod8gpkx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30948: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30948 nc: connect to 10.10.190.207 port 30948 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
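Editor's note: the loop above is the suite's service-reachability probe. It repeatedly execs nc inside the client pod execpod8gpkx against the node IP and NodePort (10.10.190.207:30948) until the connection succeeds or a 2m0s budget is exhausted. As a rough illustration of that retry-until-deadline shape, here is a minimal self-contained Go sketch; the helper name waitForTCP, the intervals, and the hard-coded endpoint are illustrative assumptions, not the actual k8s.io/kubernetes framework helper (which runs the probe through kubectl exec rather than dialing directly).

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr over TCP every interval until it connects
// or the overall timeout elapses, mirroring the retry loop above.
// (Illustrative sketch only; not the e2e framework's code.)
func waitForTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Per-attempt timeout of 2s, like `nc -w 2` in the log.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s failed: %v; retrying...\n", addr, err)
		time.Sleep(interval)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := waitForTCP("10.10.190.207:30948", time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}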
Jun 17 22:02:26.823: FAIL: Unexpected error:
    <*errors.errorString | 0xc003e470a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30948 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30948 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00178a780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00178a780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00178a780, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9559".
STEP: Found 17 events.
Jun 17 22:02:26.838: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod8gpkx: { } Scheduled: Successfully assigned services-9559/execpod8gpkx to node1
Jun 17 22:02:26.838: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-kqgs5: { } Scheduled: Successfully assigned services-9559/nodeport-test-kqgs5 to node2
Jun 17 22:02:26.838: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-l42bj: { } Scheduled: Successfully assigned services-9559/nodeport-test-l42bj to node2
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:13 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-kqgs5
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:13 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-l42bj
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:15 +0000 UTC - event for nodeport-test-kqgs5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:16 +0000 UTC - event for nodeport-test-kqgs5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 246.523542ms
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:16 +0000 UTC - event for nodeport-test-kqgs5: {kubelet node2} Created: Created container nodeport-test
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:17 +0000 UTC - event for nodeport-test-kqgs5: {kubelet node2} Started: Started container nodeport-test
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:17 +0000 UTC - event for nodeport-test-l42bj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:17 +0000 UTC - event for nodeport-test-l42bj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 248.401382ms
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:17 +0000 UTC - event for nodeport-test-l42bj: {kubelet node2} Created: Created container nodeport-test
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:18 +0000 UTC - event for nodeport-test-l42bj: {kubelet node2} Started: Started container nodeport-test
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:21 +0000 UTC - event for execpod8gpkx: {kubelet node1} Started: Started container agnhost-container
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:21 +0000 UTC - event for execpod8gpkx: {kubelet node1} Created: Created container agnhost-container
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:21 +0000 UTC - event for execpod8gpkx: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:02:26.838: INFO: At 2022-06-17 22:00:21 +0000 UTC - event for execpod8gpkx: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 269.635463ms
Jun 17 22:02:26.840: INFO: POD                  NODE   PHASE    GRACE  CONDITIONS
Jun 17 22:02:26.840: INFO: execpod8gpkx         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC }]
Jun 17 22:02:26.841: INFO: nodeport-test-kqgs5  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:13 +0000 UTC }]
Jun 17 22:02:26.841: INFO: nodeport-test-l42bj  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:00:13 +0000 UTC }]
Jun 17 22:02:26.841: INFO:
Jun 17 22:02:26.845: INFO: Logging node info for node master1
Jun 17 22:02:26.847: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 36566 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:19 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:19 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:19 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:19 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:26.848: INFO: Logging kubelet events for node master1 Jun 17 22:02:26.850: INFO: Logging pods the kubelet 
thinks is on node master1 Jun 17 22:02:26.869: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.869: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:02:26.869: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.869: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:02:26.869: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:26.869: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:26.869: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:26.869: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.869: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:26.869: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.869: INFO: Container kube-scheduler ready: true, restart count 0 Jun 17 22:02:26.869: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.869: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:02:26.869: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:26.869: INFO: Container docker-registry ready: true, restart count 0 Jun 17 22:02:26.869: INFO: Container nginx ready: true, restart count 0 Jun 17 22:02:26.869: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:26.869: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:26.869: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:26.954: INFO: Latency metrics for node master1 Jun 17 22:02:26.954: INFO: Logging node info for node master2 Jun 17 22:02:26.956: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 36465 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:17 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:17 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:17 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:17 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:26.956: INFO: Logging kubelet events for node master2 Jun 17 22:02:26.958: INFO: Logging pods the kubelet thinks is on node master2 Jun 17 22:02:26.968: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:02:26.968: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:02:26.968: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:26.968: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container coredns ready: true, restart count 1 Jun 17 22:02:26.968: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container autoscaler ready: true, restart count 1 Jun 17 22:02:26.968: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:02:26.968: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:02:26.968: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:26.968: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:26.968: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:26.968: INFO: Container nfd-controller ready: true, restart count 0 Jun 17 22:02:26.968: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:26.968: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:26.968: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:27.053: INFO: Latency metrics for node master2 Jun 17 22:02:27.053: INFO: Logging node info for node master3 Jun 17 22:02:27.055: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 36780 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:25 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:25 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:25 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:25 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:27.055: INFO: Logging kubelet events for node master3 Jun 17 22:02:27.057: INFO: Logging pods the kubelet thinks is on node master3 Jun 17 22:02:27.063: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:02:27.063: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Container coredns ready: true, restart count 1 Jun 17 22:02:27.063: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:27.063: INFO: Container prometheus-operator ready: true, restart count 0 Jun 17 22:02:27.063: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:27.063: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:27.063: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:02:27.063: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:02:27.063: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:02:27.063: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Init container install-cni ready: true, restart count 0 Jun 17 22:02:27.063: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:27.063: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.063: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:27.137: INFO: Latency metrics for node master3 Jun 17 22:02:27.137: INFO: Logging node info for node node1 Jun 17 22:02:27.139: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 36660 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true 
feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:27.139: INFO: Logging kubelet events for node node1 Jun 17 22:02:27.141: INFO: Logging pods the kubelet thinks is on node node1 Jun 17 22:02:27.158: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:27.158: INFO: Container collectd ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:02:27.158: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 
20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:02:27.158: INFO: var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912 started at 2022-06-17 21:59:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container dapi-container ready: false, restart count 0 Jun 17 22:02:27.158: INFO: liveness-1ef21f78-cbf3-441b-b64f-26a3531eb236 started at 2022-06-17 22:02:24 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container agnhost-container ready: false, restart count 0 Jun 17 22:02:27.158: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:27.158: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:27.158: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:27.158: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:27.158: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:27.158: INFO: Container discover ready: false, restart count 0 Jun 17 22:02:27.158: INFO: Container init ready: false, restart count 0 Jun 17 22:02:27.158: INFO: Container install ready: false, restart count 0 Jun 17 22:02:27.158: INFO: replace-27591722-kb5sh started at 2022-06-17 22:02:00 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container c ready: true, restart count 0 Jun 17 22:02:27.158: INFO: busybox-1d7e38a9-ea32-4597-ac48-fc08f0d0407d started at 2022-06-17 22:02:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container busybox ready: false, restart count 0 Jun 17 22:02:27.158: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:02:27.158: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:27.158: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:02:27.158: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:02:27.158: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:27.158: INFO: adopt-release-7bxds started at 2022-06-17 22:02:16 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container c ready: true, restart count 0 Jun 17 22:02:27.158: INFO: adopt-release-p7hdj started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container c ready: true, restart count 0 Jun 17 22:02:27.158: INFO: execpodphlr7 started at 2022-06-17 22:02:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:02:27.158: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container 
nginx-proxy ready: true, restart count 2 Jun 17 22:02:27.158: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:02:27.158: INFO: execpod8gpkx started at 2022-06-17 22:00:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:02:27.158: INFO: server-envvars-98b588af-2ad7-47f4-b319-3648a38c07a5 started at 2022-06-17 22:01:49 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container srv ready: false, restart count 0 Jun 17 22:02:27.158: INFO: busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e started at 2022-06-17 22:01:54 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container busybox-host-aliases00fe61d8-5d2e-435d-b09f-685c654a426e ready: true, restart count 0 Jun 17 22:02:27.158: INFO: pod-handle-http-request started at 2022-06-17 22:02:12 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:02:27.158: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:02:27.158: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded) Jun 17 22:02:27.158: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container grafana ready: true, restart count 0 Jun 17 22:02:27.158: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:02:27.158: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:27.158: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:02:28.409: INFO: Latency metrics for node node1 Jun 17 22:02:28.409: INFO: Logging node info for node node2 Jun 17 22:02:28.412: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 36668 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true 
feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:02:22 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:02:28.413: INFO: Logging kubelet events for node node2 Jun 17 22:02:28.414: INFO: Logging pods the kubelet thinks is on node node2 Jun 17 22:02:28.428: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:02:28.428: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:02:28.428: INFO: adopt-release-bklcl started at 2022-06-17 22:01:55 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container c ready: true, restart count 0 Jun 17 22:02:28.428: INFO: pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11 started at 2022-06-17 22:02:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container agnhost-container ready: false, restart count 0 Jun 17 22:02:28.428: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:02:28.428: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:28.428: INFO: Container discover ready: false, restart count 0 Jun 17 22:02:28.428: INFO: Container init ready: false, restart count 0 Jun 17 22:02:28.428: INFO: Container install ready: false, restart count 0 Jun 17 22:02:28.428: INFO: nodeport-test-kqgs5 started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container 
nodeport-test ready: true, restart count 0 Jun 17 22:02:28.428: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:02:28.428: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:28.428: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:02:28.428: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:02:28.428: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:28.428: INFO: Container collectd ready: true, restart count 0 Jun 17 22:02:28.428: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:02:28.428: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:02:28.428: INFO: pod-configmaps-ac082623-f0e5-441d-8065-c7e69716b5a3 started at 2022-06-17 22:02:20 +0000 UTC (0+3 container statuses recorded) Jun 17 22:02:28.428: INFO: Container createcm-volume-test ready: true, restart count 0 Jun 17 22:02:28.428: INFO: Container delcm-volume-test ready: true, restart count 0 Jun 17 22:02:28.428: INFO: Container updcm-volume-test ready: true, restart count 0 Jun 17 22:02:28.428: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:02:28.428: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:02:28.428: INFO: send-events-26f1edeb-46d9-4567-b1b2-926258bc968b started at 2022-06-17 22:01:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container p ready: false, restart count 0 Jun 17 22:02:28.428: INFO: nodeport-test-l42bj started at 2022-06-17 22:00:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container nodeport-test ready: true, restart count 0 Jun 17 22:02:28.428: INFO: agnhost-primary-876jh started at 2022-06-17 22:02:20 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container agnhost-primary ready: true, restart count 0 Jun 17 22:02:28.428: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:02:28.428: INFO: test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 started at 2022-06-17 22:01:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.428: INFO: Container test-webserver ready: true, restart count 0 Jun 17 22:02:28.429: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:02:28.429: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:02:28.429: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:02:28.429: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:02:28.429: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:02:28.696: INFO: Latency metrics for node node2 Jun 17 22:02:28.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9559" for this suite. 
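The services-9559 spec being torn down here is the NodePort test whose failure summary follows: the endpoint 10.10.190.207:30948 never accepted a TCP connection inside the 2m0s window. The check behind that "service is not reachable" message is essentially a dial-in-a-loop; a minimal standalone sketch of such a poll (a hypothetical helper, not the framework's actual implementation) looks like:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// pollNodePort dials host:port until a TCP connection succeeds or the
// timeout elapses, loosely mirroring the reachability loop whose
// failure is reported in the summary below.
func pollNodePort(host string, port int, timeout time.Duration) error {
	addr := net.JoinHostPort(host, fmt.Sprint(port))
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // endpoint accepted a connection
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := pollNodePort("10.10.190.207", 30948, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

When every attempt fails like this while the backing pods (nodeport-test-kqgs5, nodeport-test-l42bj) report ready, the usual suspects are the node-level NodePort plumbing (kube-proxy rules) rather than the workload itself.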
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [134.853 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:02:26.823: Unexpected error: <*errors.errorString | 0xc003e470a0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30948 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30948 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":6,"skipped":97,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 21:59:39.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Jun 17 22:01:40.048: INFO: Successfully updated pod "var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912" STEP: waiting for pod running STEP: deleting the pod gracefully Jun 17 22:01:44.054: INFO: Deleting pod "var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912" in namespace "var-expansion-7433" Jun 17 22:01:44.059: INFO: Wait up to 5m0s for pod "var-expansion-8fe100b7-cb54-442a-8a73-a4b304daf912" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:30.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7433" for this suite. 
• [SLOW TEST:170.578 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:28.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:30.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9315" for this suite. • ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:26.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-61cac71d-aa26-4ff8-87fb-7f926b8a3487 STEP: Creating a pod to test consume configMaps Jun 17 22:02:26.524: INFO: Waiting up to 5m0s for pod "pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11" in namespace "configmap-8317" to be "Succeeded or Failed" Jun 17 22:02:26.526: INFO: Pod "pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11": Phase="Pending", Reason="", readiness=false. Elapsed: 1.833161ms Jun 17 22:02:28.531: INFO: Pod "pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006761789s Jun 17 22:02:30.536: INFO: Pod "pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012373166s STEP: Saw pod success Jun 17 22:02:30.536: INFO: Pod "pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11" satisfied condition "Succeeded or Failed" Jun 17 22:02:30.539: INFO: Trying to get logs from node node2 pod pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11 container agnhost-container: STEP: delete the pod Jun 17 22:02:30.551: INFO: Waiting for pod pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11 to disappear Jun 17 22:02:30.553: INFO: Pod pod-configmaps-1dd908bd-e43c-4944-a10d-8b9be464df11 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:30.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8317" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":25,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":234,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:28.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 17 22:02:28.501: INFO: Waiting up to 5m0s for pod "security-context-da449df5-c0ba-49de-ae47-6970190beb16" in namespace "security-context-717" to be "Succeeded or Failed" Jun 17 22:02:28.504: INFO: Pod "security-context-da449df5-c0ba-49de-ae47-6970190beb16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.211645ms Jun 17 22:02:30.508: INFO: Pod "security-context-da449df5-c0ba-49de-ae47-6970190beb16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007155305s Jun 17 22:02:32.510: INFO: Pod "security-context-da449df5-c0ba-49de-ae47-6970190beb16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009841295s STEP: Saw pod success Jun 17 22:02:32.511: INFO: Pod "security-context-da449df5-c0ba-49de-ae47-6970190beb16" satisfied condition "Succeeded or Failed" Jun 17 22:02:32.513: INFO: Trying to get logs from node node1 pod security-context-da449df5-c0ba-49de-ae47-6970190beb16 container test-container: STEP: delete the pod Jun 17 22:02:32.526: INFO: Waiting for pod security-context-da449df5-c0ba-49de-ae47-6970190beb16 to disappear Jun 17 22:02:32.528: INFO: Pod security-context-da449df5-c0ba-49de-ae47-6970190beb16 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:32.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-717" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:28.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:02:28.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54" in namespace "downward-api-743" to be "Succeeded or Failed" Jun 17 22:02:28.764: INFO: Pod "downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54": Phase="Pending", Reason="", readiness=false. Elapsed: 5.72965ms Jun 17 22:02:30.767: INFO: Pod "downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009135023s Jun 17 22:02:32.771: INFO: Pod "downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012903579s STEP: Saw pod success Jun 17 22:02:32.771: INFO: Pod "downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54" satisfied condition "Succeeded or Failed" Jun 17 22:02:32.774: INFO: Trying to get logs from node node2 pod downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54 container client-container: STEP: delete the pod Jun 17 22:02:32.786: INFO: Waiting for pod downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54 to disappear Jun 17 22:02:32.787: INFO: Pod downwardapi-volume-3a1dc1b3-a3dc-4d68-b1ff-ff25486e4a54 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:32.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-743" for this suite. 
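The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above, with their Elapsed readouts, are a phase poll against the API server. A minimal sketch of that wait, assuming client-go v0.21.x and an already-constructed clientset (the function name here is illustrative, not one of the framework's own helpers):

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceeded polls the pod's phase every 2s until it reaches
// Succeeded, errors out on Failed, or the 5m timeout elapses -- the
// same shape as the "Succeeded or Failed" wait logged above.
func waitForPodSucceeded(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return true, fmt.Errorf("pod %s/%s failed: %s", ns, name, pod.Status.Reason)
		}
		return false, nil // still Pending/Running; keep polling
	})
}
```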
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":104,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:32.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Jun 17 22:02:43.369: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1935 pod-service-account-cb1c9061-4062-451a-96c6-1400f2485ea9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 17 22:02:43.630: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1935 pod-service-account-cb1c9061-4062-451a-96c6-1400f2485ea9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 17 22:02:43.866: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1935 pod-service-account-cb1c9061-4062-451a-96c6-1400f2485ea9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:44.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1935" for this suite. 
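The three kubectl exec calls in the svcaccounts-1935 spec above read the standard projected service-account mount. From inside any pod where the token is mounted (the default), the same files can be read directly; a minimal sketch:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Default in-pod mount point for the service-account credential files
// that the spec above cats via `kubectl exec`.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(saDir, f))
		if err != nil {
			fmt.Println(f, "not mounted:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}
```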
• [SLOW TEST:11.315 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":8,"skipped":115,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:20.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Jun 17 22:02:20.206: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:45.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7434" for this suite. 
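The rename exercised by the crd-publish-openapi spec above amounts to editing the CRD's spec.versions list: the old name stops being served, the new name is, and the published OpenAPI document must follow. A sketch of the relevant slice using the apiextensions/v1 types (version names and flags here are illustrative):

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// Renaming a version means replacing its Name and updating the CRD;
	// the apiserver then publishes OpenAPI under the new name and drops
	// the old one, which is what the STEPs above assert.
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v2", Served: true, Storage: true},
		{Name: "v5", Served: true, Storage: false}, // renamed from an older name (illustrative)
	}
	for _, v := range versions {
		fmt.Printf("version %s served=%v storage=%v\n", v.Name, v.Served, v.Storage)
	}
}
```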
• [SLOW TEST:25.299 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":18,"skipped":283,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:45.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:45.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8751" for this suite. 
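The QOS-class assertion in the pods-8751 spec above rests on a simple rule: when every container's requests equal its limits for both cpu and memory, the pod is classed Guaranteed. A sketch of a spec satisfying that rule (image and quantities are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// requests == limits for both cpu and memory => Guaranteed QoS.
	res := v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("100m"),
		v1.ResourceMemory: resource.MustParse("128Mi"),
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:      "agnhost",
				Image:     "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Resources: v1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	// After creation the control plane sets status.qosClass; the spec
	// above verifies it equals "Guaranteed".
	fmt.Println(pod.Name, "expects QoS", v1.PodQOSGuaranteed)
}
```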
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":19,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:24.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-1ef21f78-cbf3-441b-b64f-26a3531eb236 in namespace container-probe-1238 Jun 17 22:02:28.766: INFO: Started pod liveness-1ef21f78-cbf3-441b-b64f-26a3531eb236 in namespace container-probe-1238 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 22:02:28.769: INFO: Initial restart count of pod liveness-1ef21f78-cbf3-441b-b64f-26a3531eb236 is 0 Jun 17 22:02:46.810: INFO: Restart count of pod container-probe-1238/liveness-1ef21f78-cbf3-441b-b64f-26a3531eb236 is now 1 (18.040454158s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1238" for this suite. 
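Note: the 18s-to-first-restart behaviour above is the standard /healthz liveness pattern: the kubelet probes over HTTP and restarts the container once the probe has failed failureThreshold times. A sketch of an equivalent pod, assuming the agnhost image's liveness server (which deliberately starts failing /healthz shortly after startup; the port and args here are assumptions):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata: {name: liveness-demo}
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: [liveness]                          # serves /healthz, then fails it
    livenessProbe:
      httpGet: {path: /healthz, port: 8080}
      initialDelaySeconds: 15
      failureThreshold: 1
EOF
# restartCount increments each time the kubelet restarts the container:
$ kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'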
• [SLOW TEST:22.099 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":183,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:46.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:46.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3957" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":22,"skipped":183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":182,"failed":0} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:32.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Jun 17 22:02:32.577: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:34.580: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:36.580: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:38.579: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 17 22:02:38.596: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:40.603: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:42.601: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Jun 17 22:02:42.608: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 17 22:02:42.611: INFO: Pod pod-with-prestop-http-hook still exists Jun 17 22:02:44.612: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 17 22:02:44.615: INFO: Pod pod-with-prestop-http-hook still exists Jun 17 22:02:46.615: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 17 22:02:46.619: INFO: Pod pod-with-prestop-http-hook still exists Jun 17 22:02:48.612: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 17 22:02:48.614: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:48.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9891" for this suite. • [SLOW TEST:16.088 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":182,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:44.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jun 17 22:02:44.194: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jun 17 22:02:44.198: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 17 22:02:44.199: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jun 17 22:02:44.216: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jun 17 22:02:44.216: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jun 17 22:02:44.231: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jun 17 22:02:44.231: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jun 17 22:02:51.278: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:51.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3730" for this suite. 
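Note: every quantity being "Verified" above was injected by the LimitRange admission plugin: defaultRequest fills in missing requests, default fills in missing limits, and a container that sets only a limit gets its request defaulted to that limit (the 300m cpu case in the log). A minimal sketch with illustrative quantities:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata: {name: lr-demo}
spec:
  limits:
  - type: Container
    defaultRequest: {cpu: 100m, memory: 200Mi}   # becomes .resources.requests
    default:        {cpu: 500m, memory: 500Mi}   # becomes .resources.limits
    min: {cpu: 50m}                              # pods below this are rejected
    max: {cpu: "1"}                              # pods above this are rejected
EOF
# A pod created with no resources in this namespace comes back with them set:
$ kubectl run lr-pod --image=k8s.gcr.io/pause:3.4.1
$ kubectl get pod lr-pod -o jsonpath='{.spec.containers[0].resources}'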
• [SLOW TEST:7.130 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":9,"skipped":132,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:51.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:51.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9217" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":10,"skipped":140,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:46.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:02:46.959: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:51.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8861" for this suite. 
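Note: getting/updating/patching /status on a custom resource, as in the transcript above, only works because the CRD opts into the status subresource; once declared, writes to the main resource ignore .status and writes to /status ignore everything else. A minimal CRD sketch (widgets.example.com is hypothetical):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget, listKind: WidgetList}
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                 # exposes .../widgets/<name>/status
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF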
• [SLOW TEST:5.059 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":23,"skipped":221,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:45.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:53.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3767" for this suite. 
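Note: the kubelet test above asserts that a container whose command always fails surfaces a terminated state with a populated reason. This is reproducible with any image whose entrypoint exits non-zero; the names below are illustrative:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata: {name: always-fails}
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]      # exits 1 immediately
EOF
$ kubectl get pod always-fails \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expect: Error (with a non-zero exitCode alongside it)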
• [SLOW TEST:8.051 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:53.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:53.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1026" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":21,"skipped":367,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:52.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-25e85115-4fc3-42e6-a442-4665882723a7 STEP: Creating a pod to test consume configMaps Jun 17 22:02:52.063: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84" in namespace "projected-1199" to be "Succeeded or Failed" Jun 17 22:02:52.065: INFO: Pod "pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020574ms Jun 17 22:02:54.068: INFO: Pod "pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005034827s Jun 17 22:02:56.073: INFO: Pod "pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010117496s STEP: Saw pod success Jun 17 22:02:56.073: INFO: Pod "pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84" satisfied condition "Succeeded or Failed" Jun 17 22:02:56.075: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84 container agnhost-container: STEP: delete the pod Jun 17 22:02:56.091: INFO: Waiting for pod pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84 to disappear Jun 17 22:02:56.093: INFO: Pod pod-projected-configmaps-601f7a15-c290-41f8-ac57-cf7b37724a84 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:56.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1199" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":233,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:51.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 17 22:02:51.398: INFO: Waiting up to 5m0s for pod "pod-6f733c0f-250b-4e34-b58b-85afd9111c2f" in namespace "emptydir-3049" to be "Succeeded or Failed" Jun 17 22:02:51.399: INFO: Pod "pod-6f733c0f-250b-4e34-b58b-85afd9111c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.766016ms Jun 17 22:02:53.404: INFO: Pod "pod-6f733c0f-250b-4e34-b58b-85afd9111c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006334314s Jun 17 22:02:55.411: INFO: Pod "pod-6f733c0f-250b-4e34-b58b-85afd9111c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013689826s Jun 17 22:02:57.416: INFO: Pod "pod-6f733c0f-250b-4e34-b58b-85afd9111c2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018395596s STEP: Saw pod success Jun 17 22:02:57.416: INFO: Pod "pod-6f733c0f-250b-4e34-b58b-85afd9111c2f" satisfied condition "Succeeded or Failed" Jun 17 22:02:57.419: INFO: Trying to get logs from node node1 pod pod-6f733c0f-250b-4e34-b58b-85afd9111c2f container test-container: STEP: delete the pod Jun 17 22:02:57.432: INFO: Waiting for pod pod-6f733c0f-250b-4e34-b58b-85afd9111c2f to disappear Jun 17 22:02:57.434: INFO: Pod pod-6f733c0f-250b-4e34-b58b-85afd9111c2f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:57.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3049" for this suite. 
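Note: the emptydir (non-root,0666,default) case above boils down to: mount an emptyDir on the default (node-disk) medium, write a file with mode 0666 as a non-root UID, and assert the mode from the pod's logs. An equivalent hand-rolled pod, with illustrative names:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata: {name: emptydir-demo}
spec:
  restartPolicy: Never
  securityContext: {runAsUser: 1001}     # the "non-root" part
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && ls -ln /ed/f"]
    volumeMounts: [{name: ed, mountPath: /ed}]
  volumes:
  - name: ed
    emptyDir: {}                         # no medium set => "default" (node disk)
EOF
$ kubectl logs emptydir-demo             # expect -rw-rw-rw- owned by uid 1001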
• [SLOW TEST:6.078 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":149,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:30.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6873 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6873 STEP: creating replication controller externalsvc in namespace services-6873 I0617 22:02:30.623096 29 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6873, replica count: 2 I0617 22:02:33.674007 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:02:36.674769 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jun 17 22:02:36.688: INFO: Creating new exec pod Jun 17 22:02:40.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6873 exec execpodbglsf -- /bin/sh -x -c nslookup clusterip-service.services-6873.svc.cluster.local' Jun 17 22:02:40.980: INFO: stderr: "+ nslookup clusterip-service.services-6873.svc.cluster.local\n" Jun 17 22:02:40.980: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-6873.svc.cluster.local\tcanonical name = externalsvc.services-6873.svc.cluster.local.\nName:\texternalsvc.services-6873.svc.cluster.local\nAddress: 10.233.12.205\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6873, will wait for the garbage collector to delete the pods Jun 17 22:02:41.037: INFO: Deleting ReplicationController externalsvc took: 3.671657ms Jun 17 22:02:41.138: INFO: Terminating ReplicationController externalsvc pods took: 100.18405ms Jun 17 22:02:59.350: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 
22:02:59.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6873" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.786 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":6,"skipped":241,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:53.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:02:53.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d" in namespace "downward-api-2122" to be "Succeeded or Failed" Jun 17 22:02:53.857: INFO: Pod "downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269017ms Jun 17 22:02:55.862: INFO: Pod "downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007014962s Jun 17 22:02:57.865: INFO: Pod "downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010378838s Jun 17 22:02:59.870: INFO: Pod "downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014632035s STEP: Saw pod success Jun 17 22:02:59.870: INFO: Pod "downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d" satisfied condition "Succeeded or Failed" Jun 17 22:02:59.872: INFO: Trying to get logs from node node1 pod downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d container client-container: STEP: delete the pod Jun 17 22:02:59.884: INFO: Waiting for pod downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d to disappear Jun 17 22:02:59.886: INFO: Pod downwardapi-volume-c61fd9f0-b279-4047-bd31-881532b52b7d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:02:59.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2122" for this suite. 
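Note: the downward-api volume in the test above exposes the container's own memory limit as a file via resourceFieldRef; the test then compares the file's content with the limit it set. A sketch with illustrative values (the divisor defaults to 1, so the file contains bytes):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata: {name: downward-demo}
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits: {memory: 64Mi}
    volumeMounts: [{name: podinfo, mountPath: /etc/podinfo}]
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory        # file contains 67108864 (64Mi in bytes)
EOF
$ kubectl logs downward-demo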
• [SLOW TEST:6.086 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:59.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0617 22:01:59.116190 35 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:01.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1740" for this suite. 
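Note: "ReplaceConcurrent" in the transcript above is concurrencyPolicy: Replace: if the previous job is still running when the next schedule fires, the controller deletes it and starts a fresh one, which is exactly the "job is replaced with a new one" step. A sketch against the same batch/v1beta1 API the log warns about (batch/v1 is preferred from 1.21 on):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata: {name: replace-demo}
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace             # delete the running job, start anew
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
            command: ["sleep", "300"]    # deliberately outlives the 1m schedule
EOF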
• [SLOW TEST:62.057 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":4,"skipped":91,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:30.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:02:30.601: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:32.606: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:34.606: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:36.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:38.606: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:40.607: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:42.606: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:02:44.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:46.606: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:48.606: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:50.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:52.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:54.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:56.612: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:02:58.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 
22:03:00.608: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = false) Jun 17 22:03:02.605: INFO: The status of Pod test-webserver-96bdae8c-f029-4b27-8e50-3cf51fe013c6 is Running (Ready = true) Jun 17 22:03:02.607: INFO: Container started at 2022-06-17 22:02:42 +0000 UTC, pod became ready at 2022-06-17 22:03:00 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6894" for this suite. • [SLOW TEST:32.047 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":27,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:02.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Jun 17 22:03:02.662: INFO: created test-pod-1 Jun 17 22:03:02.673: INFO: created test-pod-2 Jun 17 22:03:02.684: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:02.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2113" for this suite. 
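Note: "delete a collection of pods" above exercises the API's DELETE-collection verb (the e2e framework calls DeleteCollection via client-go) rather than three single deletes. The closest everyday equivalent is deletion by label selector, sketched here assuming the pods carry a common label (the label key/value is made up, and kubectl may issue individual deletes under the hood):

# Create three labelled pods, then remove them all with one selector:
$ for i in 1 2 3; do kubectl run test-pod-$i --image=k8s.gcr.io/pause:3.4.1 --labels=set=test-pods; done
$ kubectl delete pods -l set=test-pods
$ kubectl get pods -l set=test-pods      # should come back empty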
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":4,"skipped":34,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:02.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:03:02.767: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jun 17 22:03:03.788: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:04.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9084" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":5,"skipped":49,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:56.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:02:56.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22" in namespace "projected-5476" to be "Succeeded or Failed" Jun 17 22:02:56.164: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.314517ms Jun 17 22:02:58.168: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005872228s Jun 17 22:03:00.171: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00867992s Jun 17 22:03:02.175: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012771484s Jun 17 22:03:04.178: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016185793s Jun 17 22:03:06.183: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020840388s STEP: Saw pod success Jun 17 22:03:06.183: INFO: Pod "downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22" satisfied condition "Succeeded or Failed" Jun 17 22:03:06.185: INFO: Trying to get logs from node node2 pod downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22 container client-container: STEP: delete the pod Jun 17 22:03:06.196: INFO: Waiting for pod downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22 to disappear Jun 17 22:03:06.198: INFO: Pod downwardapi-volume-efdf85b5-2498-4f1c-9be2-586a01f57c22 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:06.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5476" for this suite. • [SLOW TEST:10.081 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":246,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:48.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 17 22:02:48.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2886 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Jun 17 22:02:48.824: INFO: stderr: "" Jun 17 22:02:48.824: INFO: 
stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jun 17 22:02:53.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2886 get pod e2e-test-httpd-pod -o json' Jun 17 22:02:54.049: INFO: stderr: "" Jun 17 22:02:54.049: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.86\\\"\\n ],\\n \\\"mac\\\": \\\"16:59:4c:6a:70:1d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.86\\\"\\n ],\\n \\\"mac\\\": \\\"16:59:4c:6a:70:1d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-06-17T22:02:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2886\",\n \"resourceVersion\": \"37599\",\n \"uid\": \"73c28624-3e46-426a-b661-00ca36272fbe\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-8v5h4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-8v5h4\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-17T22:02:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-17T22:02:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-06-17T22:02:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2022-06-17T22:02:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://dea2d57a69aa55dfc8dcee64a0e7862022a3b94e9ef9e79840e4c7c7bcd10287\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-06-17T22:02:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.3.86\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.3.86\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-06-17T22:02:48Z\"\n }\n}\n" STEP: replace the image in the pod Jun 17 22:02:54.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2886 replace -f -' Jun 17 22:02:54.443: INFO: stderr: "" Jun 17 22:02:54.443: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Jun 17 22:02:54.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2886 delete pods e2e-test-httpd-pod' Jun 17 22:03:09.502: INFO: stderr: "" Jun 17 22:03:09.502: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:09.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2886" for this suite. 
• [SLOW TEST:20.866 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":20,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:59.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 17 22:03:00.007: INFO: Waiting up to 5m0s for pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68" in namespace "emptydir-6956" to be "Succeeded or Failed" Jun 17 22:03:00.010: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295174ms Jun 17 22:03:02.013: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005900061s Jun 17 22:03:04.017: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009121602s Jun 17 22:03:06.022: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014872489s Jun 17 22:03:08.026: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018277825s Jun 17 22:03:10.031: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023709531s Jun 17 22:03:12.035: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.0274787s STEP: Saw pod success Jun 17 22:03:12.035: INFO: Pod "pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68" satisfied condition "Succeeded or Failed" Jun 17 22:03:12.038: INFO: Trying to get logs from node node2 pod pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68 container test-container: STEP: delete the pod Jun 17 22:03:12.049: INFO: Waiting for pod pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68 to disappear Jun 17 22:03:12.050: INFO: Pod pod-6b89c07b-2de6-47a7-9660-f0cc961ecf68 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:12.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6956" for this suite. 
• [SLOW TEST:12.085 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":410,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:59.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-2878 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2878 to expose endpoints map[] Jun 17 22:02:59.415: INFO: successfully validated that service endpoint-test2 in namespace services-2878 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2878 Jun 17 22:02:59.432: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:01.436: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:03.436: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:05.437: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:07.436: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:09.435: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2878 to expose endpoints map[pod1:[80]] Jun 17 22:03:09.446: INFO: successfully validated that service endpoint-test2 in namespace services-2878 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2878 Jun 17 22:03:09.459: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:11.465: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:13.462: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:15.463: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:17.463: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2878 to expose endpoints map[pod1:[80] pod2:[80]] Jun 17 22:03:17.476: INFO: successfully validated that service endpoint-test2 in namespace services-2878 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2878 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2878 to expose endpoints 
map[pod2:[80]] Jun 17 22:03:17.490: INFO: successfully validated that service endpoint-test2 in namespace services-2878 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2878 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2878 to expose endpoints map[] Jun 17 22:03:17.500: INFO: successfully validated that service endpoint-test2 in namespace services-2878 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:17.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2878" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:18.135 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":7,"skipped":249,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:04.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4627 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4627 I0617 22:03:04.846817 32 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4627, replica count: 2 I0617 22:03:07.898136 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:03:10.900188 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:03:10.900: INFO: Creating new exec pod Jun 17 22:03:17.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4627 exec execpodb6sdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jun 17 22:03:18.203: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jun 17 22:03:18.203: INFO: stdout: 
"externalname-service-99zmm" Jun 17 22:03:18.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4627 exec execpodb6sdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.2.55 80' Jun 17 22:03:18.521: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.2.55 80\nConnection to 10.233.2.55 80 port [tcp/http] succeeded!\n" Jun 17 22:03:18.521: INFO: stdout: "externalname-service-rsn97" Jun 17 22:03:18.521: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:18.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4627" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:13.733 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":6,"skipped":52,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:12.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:19.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-559" for this suite. • [SLOW TEST:7.042 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":24,"skipped":413,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:19.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics Jun 17 22:03:20.218: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 17 22:03:20.280: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:20.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4240" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":25,"skipped":432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:09.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jun 17 22:03:09.576: INFO: namespace kubectl-8102 Jun 17 22:03:09.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8102 create -f -' Jun 17 22:03:09.996: INFO: stderr: "" Jun 17 22:03:09.996: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 17 22:03:11.001: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:11.001: INFO: Found 0 / 1 Jun 17 22:03:12.001: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:12.001: INFO: Found 0 / 1 Jun 17 22:03:13.000: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:13.000: INFO: Found 0 / 1 Jun 17 22:03:14.000: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:14.000: INFO: Found 0 / 1 Jun 17 22:03:15.001: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:15.001: INFO: Found 0 / 1 Jun 17 22:03:16.000: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:16.000: INFO: Found 1 / 1 Jun 17 22:03:16.000: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 17 22:03:16.003: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:03:16.003: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 17 22:03:16.003: INFO: wait on agnhost-primary startup in kubectl-8102 Jun 17 22:03:16.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8102 logs agnhost-primary-fmhz8 agnhost-primary' Jun 17 22:03:16.163: INFO: stderr: "" Jun 17 22:03:16.163: INFO: stdout: "Paused\n" STEP: exposing RC Jun 17 22:03:16.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8102 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jun 17 22:03:16.380: INFO: stderr: "" Jun 17 22:03:16.380: INFO: stdout: "service/rm2 exposed\n" Jun 17 22:03:16.383: INFO: Service rm2 in namespace kubectl-8102 found. STEP: exposing service Jun 17 22:03:18.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8102 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jun 17 22:03:18.590: INFO: stderr: "" Jun 17 22:03:18.590: INFO: stdout: "service/rm3 exposed\n" Jun 17 22:03:18.592: INFO: Service rm3 in namespace kubectl-8102 found. 
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:20.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8102" for this suite. • [SLOW TEST:11.054 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":21,"skipped":208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:17.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-a6509374-f2a9-4f2b-bd2f-3d06b15f3aef STEP: Creating a pod to test consume configMaps Jun 17 22:03:17.646: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e" in namespace "projected-2557" to be "Succeeded or Failed" Jun 17 22:03:17.650: INFO: Pod "pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865311ms Jun 17 22:03:19.653: INFO: Pod "pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007233602s Jun 17 22:03:21.657: INFO: Pod "pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011143849s STEP: Saw pod success Jun 17 22:03:21.657: INFO: Pod "pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e" satisfied condition "Succeeded or Failed" Jun 17 22:03:21.660: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e container agnhost-container: STEP: delete the pod Jun 17 22:03:21.673: INFO: Waiting for pod pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e to disappear Jun 17 22:03:21.675: INFO: Pod pod-projected-configmaps-d76022a4-1533-49a1-ada3-626132de916e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:21.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2557" for this suite. 
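The projected-configMap spec above mounts a ConfigMap key under a remapped path with an explicit per-item file mode. A minimal sketch of that volume shape (ConfigMap name, key, path, and mode are illustrative):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo           # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox:1.35             # stand-in image
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/renamed-key"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: demo-cm
              items:
              - key: data-1
                path: renamed-key       # the "mapping" under test
                mode: 0400              # the per-item mode under test
    EOF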
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":317,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:18.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:03:18.621: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c" in namespace "projected-9516" to be "Succeeded or Failed" Jun 17 22:03:18.624: INFO: Pod "downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070187ms Jun 17 22:03:20.627: INFO: Pod "downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005939659s Jun 17 22:03:22.631: INFO: Pod "downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009593633s STEP: Saw pod success Jun 17 22:03:22.631: INFO: Pod "downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c" satisfied condition "Succeeded or Failed" Jun 17 22:03:22.633: INFO: Trying to get logs from node node1 pod downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c container client-container: STEP: delete the pod Jun 17 22:03:22.645: INFO: Waiting for pod downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c to disappear Jun 17 22:03:22.647: INFO: Pod downwardapi-volume-fbd3b5ab-2882-4481-8a77-e2751b3e7e9c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:22.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9516" for this suite. 
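The downward-API spec above leans on the rule named in its title: a resourceFieldRef for limits.memory on a container that declares no memory limit reports the node's allocatable memory instead. A sketch using a plain downwardAPI volume for brevity (the test uses the projected flavour; names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-default-limit-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.35               # stand-in image
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory     # no limit declared, so node allocatable is surfaced
    EOF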
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":77,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:20.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 17 22:03:25.463: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:25.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4394" for this suite. • [SLOW TEST:5.075 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":493,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:22.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 17 22:03:22.709: INFO: Waiting up to 5m0s for pod "pod-e95c062f-561a-4f19-b7c1-b6f6686080a8" in namespace "emptydir-275" to be 
"Succeeded or Failed" Jun 17 22:03:22.711: INFO: Pod "pod-e95c062f-561a-4f19-b7c1-b6f6686080a8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.994357ms Jun 17 22:03:24.713: INFO: Pod "pod-e95c062f-561a-4f19-b7c1-b6f6686080a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00446528s Jun 17 22:03:26.716: INFO: Pod "pod-e95c062f-561a-4f19-b7c1-b6f6686080a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007693438s Jun 17 22:03:28.721: INFO: Pod "pod-e95c062f-561a-4f19-b7c1-b6f6686080a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012037187s STEP: Saw pod success Jun 17 22:03:28.721: INFO: Pod "pod-e95c062f-561a-4f19-b7c1-b6f6686080a8" satisfied condition "Succeeded or Failed" Jun 17 22:03:28.724: INFO: Trying to get logs from node node2 pod pod-e95c062f-561a-4f19-b7c1-b6f6686080a8 container test-container: STEP: delete the pod Jun 17 22:03:28.747: INFO: Waiting for pod pod-e95c062f-561a-4f19-b7c1-b6f6686080a8 to disappear Jun 17 22:03:28.749: INFO: Pod pod-e95c062f-561a-4f19-b7c1-b6f6686080a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-275" for this suite. • [SLOW TEST:6.086 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":84,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:21.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:03:22.074: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:03:24.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:03:26.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100202, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:03:29.097: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:29.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1721" for this suite. STEP: Destroying namespace "webhook-1721-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.522 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":9,"skipped":326,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:20.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285 Jun 17 22:03:20.685: INFO: Pod name my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285: Found 0 pods out of 1 Jun 17 22:03:25.689: INFO: Pod name my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285: Found 1 pods out of 1 Jun 17 22:03:25.689: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285" are running Jun 17 22:03:25.691: INFO: Pod "my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285-2n2jr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:03:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:03:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:03:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:03:20 +0000 UTC Reason: Message:}]) Jun 17 22:03:25.692: INFO: Trying to dial the pod Jun 17 22:03:30.702: INFO: Controller my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285: Got expected result from replica 1 [my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285-2n2jr]: "my-hostname-basic-e68362fa-d4d5-4829-ac49-50b8e853e285-2n2jr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:30.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2758" for this suite. 
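The ReplicationController spec above stands up one replica of a hostname-serving image and dials it until it answers with its own pod name. A rough equivalent manifest (the RC name is illustrative, and the agnhost tag is an assumption; the suite pins its own image version):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hostname-demo                 # illustrative name
    spec:
      replicas: 1
      selector:
        app: hostname-demo
      template:
        metadata:
          labels:
            app: hostname-demo
        spec:
          containers:
          - name: hostname
            image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # tag is an assumption
            args: ["serve-hostname"]      # replies with the pod's own name
            ports:
            - containerPort: 9376
    EOF

Dialling a replica on port 9376 should return that replica's pod name, which is the "expected result" the log records.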
• [SLOW TEST:10.056 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":22,"skipped":234,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:25.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Jun 17 22:03:25.556: INFO: The status of Pod pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:27.559: INFO: The status of Pod pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:29.559: INFO: The status of Pod pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:03:31.561: INFO: The status of Pod pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 17 22:03:32.076: INFO: Successfully updated pod "pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d" Jun 17 22:03:32.076: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d" in namespace "pods-6869" to be "terminated due to deadline exceeded" Jun 17 22:03:32.078: INFO: Pod "pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d": Phase="Running", Reason="", readiness=true. Elapsed: 1.95982ms Jun 17 22:03:34.081: INFO: Pod "pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.004610522s Jun 17 22:03:34.081: INFO: Pod "pod-update-activedeadlineseconds-60848298-9b27-4ae4-a1c7-c1ddf141715d" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:34.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6869" for this suite. 
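spec.activeDeadlineSeconds is one of the few pod-spec fields that may be set (or shortened) on a running pod, which is what the spec above exploits: it patches the running pod and then waits for phase Failed with reason DeadlineExceeded. A CLI sketch with illustrative names:

    kubectl run deadline-demo --image=busybox:1.35 --restart=Never -- sleep 3600
    kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
    kubectl get pod deadline-demo --watch   # phase flips to Failed / DeadlineExceeded within seconds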
• [SLOW TEST:8.576 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":509,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:30.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:30.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-6045 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:34.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-3837" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:34.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6045" for this suite. 
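The DisruptionController spec lists PodDisruptionBudgets across all namespaces and then removes them with a single collection delete. Loose CLI analogues (namespace and PDB names are illustrative, and the suite itself issues a DeleteCollection API call rather than kubectl deletes):

    kubectl create namespace demo-ns
    kubectl create poddisruptionbudget demo-pdb --selector=app=demo --min-available=1 -n demo-ns
    kubectl get pdb --all-namespaces
    kubectl delete pdb --all -n demo-ns     # remove the whole collection in one call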
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":23,"skipped":237,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:06.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Jun 17 22:03:06.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 create -f -' Jun 17 22:03:06.647: INFO: stderr: "" Jun 17 22:03:06.647: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 17 22:03:06.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:06.826: INFO: stderr: "" Jun 17 22:03:06.826: INFO: stdout: "update-demo-nautilus-mlljv update-demo-nautilus-s56sp " Jun 17 22:03:06.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-mlljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:06.998: INFO: stderr: "" Jun 17 22:03:06.998: INFO: stdout: "" Jun 17 22:03:06.998: INFO: update-demo-nautilus-mlljv is created but not running Jun 17 22:03:12.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:12.178: INFO: stderr: "" Jun 17 22:03:12.178: INFO: stdout: "update-demo-nautilus-mlljv update-demo-nautilus-s56sp " Jun 17 22:03:12.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-mlljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:12.318: INFO: stderr: "" Jun 17 22:03:12.319: INFO: stdout: "" Jun 17 22:03:12.319: INFO: update-demo-nautilus-mlljv is created but not running Jun 17 22:03:17.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:17.483: INFO: stderr: "" Jun 17 22:03:17.483: INFO: stdout: "update-demo-nautilus-mlljv update-demo-nautilus-s56sp " Jun 17 22:03:17.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-mlljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:17.644: INFO: stderr: "" Jun 17 22:03:17.644: INFO: stdout: "true" Jun 17 22:03:17.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-mlljv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:03:17.801: INFO: stderr: "" Jun 17 22:03:17.801: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:03:17.801: INFO: validating pod update-demo-nautilus-mlljv Jun 17 22:03:17.804: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:03:17.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:03:17.804: INFO: update-demo-nautilus-mlljv is verified up and running Jun 17 22:03:17.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:17.979: INFO: stderr: "" Jun 17 22:03:17.979: INFO: stdout: "true" Jun 17 22:03:17.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:03:18.151: INFO: stderr: "" Jun 17 22:03:18.151: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:03:18.151: INFO: validating pod update-demo-nautilus-s56sp Jun 17 22:03:18.155: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:03:18.155: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:03:18.155: INFO: update-demo-nautilus-s56sp is verified up and running STEP: scaling down the replication controller Jun 17 22:03:18.164: INFO: scanned /root for discovery docs: Jun 17 22:03:18.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jun 17 22:03:18.377: INFO: stderr: "" Jun 17 22:03:18.377: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 17 22:03:18.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:18.557: INFO: stderr: "" Jun 17 22:03:18.557: INFO: stdout: "update-demo-nautilus-mlljv update-demo-nautilus-s56sp " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 17 22:03:23.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:23.740: INFO: stderr: "" Jun 17 22:03:23.740: INFO: stdout: "update-demo-nautilus-mlljv update-demo-nautilus-s56sp " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 17 22:03:28.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:28.933: INFO: stderr: "" Jun 17 22:03:28.933: INFO: stdout: "update-demo-nautilus-s56sp " Jun 17 22:03:28.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:29.090: INFO: stderr: "" Jun 17 22:03:29.090: INFO: stdout: "true" Jun 17 22:03:29.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:03:29.258: INFO: stderr: "" Jun 17 22:03:29.258: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:03:29.258: INFO: validating pod update-demo-nautilus-s56sp Jun 17 22:03:29.261: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:03:29.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:03:29.261: INFO: update-demo-nautilus-s56sp is verified up and running STEP: scaling up the replication controller Jun 17 22:03:29.270: INFO: scanned /root for discovery docs: Jun 17 22:03:29.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jun 17 22:03:29.479: INFO: stderr: "" Jun 17 22:03:29.479: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 17 22:03:29.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:29.664: INFO: stderr: "" Jun 17 22:03:29.664: INFO: stdout: "update-demo-nautilus-s56sp update-demo-nautilus-w8bsp " Jun 17 22:03:29.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:29.835: INFO: stderr: "" Jun 17 22:03:29.835: INFO: stdout: "true" Jun 17 22:03:29.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:03:29.997: INFO: stderr: "" Jun 17 22:03:29.997: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:03:29.997: INFO: validating pod update-demo-nautilus-s56sp Jun 17 22:03:30.000: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:03:30.001: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:03:30.001: INFO: update-demo-nautilus-s56sp is verified up and running Jun 17 22:03:30.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-w8bsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:30.167: INFO: stderr: "" Jun 17 22:03:30.167: INFO: stdout: "" Jun 17 22:03:30.167: INFO: update-demo-nautilus-w8bsp is created but not running Jun 17 22:03:35.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:03:35.338: INFO: stderr: "" Jun 17 22:03:35.338: INFO: stdout: "update-demo-nautilus-s56sp update-demo-nautilus-w8bsp " Jun 17 22:03:35.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:35.482: INFO: stderr: "" Jun 17 22:03:35.482: INFO: stdout: "true" Jun 17 22:03:35.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-s56sp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:03:35.632: INFO: stderr: "" Jun 17 22:03:35.632: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:03:35.632: INFO: validating pod update-demo-nautilus-s56sp Jun 17 22:03:35.634: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:03:35.635: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:03:35.635: INFO: update-demo-nautilus-s56sp is verified up and running Jun 17 22:03:35.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-w8bsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:03:35.789: INFO: stderr: "" Jun 17 22:03:35.789: INFO: stdout: "true" Jun 17 22:03:35.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods update-demo-nautilus-w8bsp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:03:35.948: INFO: stderr: "" Jun 17 22:03:35.948: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:03:35.948: INFO: validating pod update-demo-nautilus-w8bsp Jun 17 22:03:35.951: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:03:35.951: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:03:35.951: INFO: update-demo-nautilus-w8bsp is verified up and running STEP: using delete to clean up resources Jun 17 22:03:35.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 delete --grace-period=0 --force -f -' Jun 17 22:03:36.074: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:03:36.074: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 17 22:03:36.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get rc,svc -l name=update-demo --no-headers' Jun 17 22:03:36.281: INFO: stderr: "No resources found in kubectl-8803 namespace.\n" Jun 17 22:03:36.281: INFO: stdout: "" Jun 17 22:03:36.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8803 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 17 22:03:36.469: INFO: stderr: "" Jun 17 22:03:36.470: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8803" for this suite. 
• [SLOW TEST:30.239 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":26,"skipped":261,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:34.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-b44d469d-88a8-4f9f-8b3e-ba047ed1b917 STEP: Creating a pod to test consume secrets Jun 17 22:03:34.150: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817" in namespace "projected-1810" to be "Succeeded or Failed" Jun 17 22:03:34.152: INFO: Pod "pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817": Phase="Pending", Reason="", readiness=false. Elapsed: 1.927924ms Jun 17 22:03:36.156: INFO: Pod "pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00567814s Jun 17 22:03:38.159: INFO: Pod "pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009060619s STEP: Saw pod success Jun 17 22:03:38.159: INFO: Pod "pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817" satisfied condition "Succeeded or Failed" Jun 17 22:03:38.161: INFO: Trying to get logs from node node1 pod pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817 container projected-secret-volume-test: STEP: delete the pod Jun 17 22:03:38.175: INFO: Waiting for pod pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817 to disappear Jun 17 22:03:38.177: INFO: Pod pod-projected-secrets-cd1f4b58-df2b-4722-8508-cf542bc25817 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:38.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1810" for this suite. 
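The projected-secret spec above is the Secret twin of the earlier projected-ConfigMap case: a key mounted under a remapped path. A minimal sketch (Secret name, key, and path are illustrative):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo       # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox:1.35             # stand-in image
        command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
        volumeMounts:
        - name: sec
          mountPath: /etc/projected-secret
          readOnly: true
      volumes:
      - name: sec
        projected:
          sources:
          - secret:
              name: demo-secret
              items:
              - key: data-1
                path: new-path-data-1   # the remapped path ("mapping") under test
    EOF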
• ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:36.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Jun 17 22:03:36.524: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint STEP: mirroring deletion of a custom Endpoint Jun 17 22:03:38.541: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:40.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-1443" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":27,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:28.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:03:29.343: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:03:31.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:28.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 17 22:03:29.343: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 17 22:03:31.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100209, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 17 22:03:34.362: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:03:34.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8459-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:42.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6549" for this suite.
STEP: Destroying namespace "webhook-6549-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.671 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":9,"skipped":116,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
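(Editor's note: behind the "Registering the mutating webhook" step is a MutatingWebhookConfiguration that points at the freshly deployed webhook Service. A rough hand-written sketch follows; the configuration name, path, CRD group/resource, and CA placeholder are all assumptions, not the suite's actual fixture.)

  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: example-mutate-crd           # hypothetical
  webhooks:
  - name: mutate-crd.example.com       # hypothetical
    clientConfig:
      service:
        name: e2e-test-webhook         # service name seen in the log above
        namespace: webhook-6549
        path: /mutating-custom-resource  # illustrative path
      caBundle: "<base64-encoded CA>"  # placeholder; the suite generates its own cert
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["e2e-test-webhook-8459-crds"]
    admissionReviewVersions: ["v1"]
    sideEffects: None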
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:40.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 22:03:40.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314" in namespace "downward-api-4811" to be "Succeeded or Failed"
Jun 17 22:03:40.627: INFO: Pod "downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09068ms
Jun 17 22:03:42.631: INFO: Pod "downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005645375s
Jun 17 22:03:44.634: INFO: Pod "downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009291195s
STEP: Saw pod success
Jun 17 22:03:44.635: INFO: Pod "downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314" satisfied condition "Succeeded or Failed"
Jun 17 22:03:44.637: INFO: Trying to get logs from node node1 pod downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314 container client-container:
STEP: delete the pod
Jun 17 22:03:44.649: INFO: Waiting for pod downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314 to disappear
Jun 17 22:03:44.651: INFO: Pod downwardapi-volume-bafa6c8d-7852-4f20-8817-aac13c38d314 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:44.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4811" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":288,"failed":0}
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":522,"failed":0}
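(Editor's note: the "podname only" downward API pod above boils down to a single fieldRef item. A minimal sketch, with illustrative names; the file content the test reads back is simply the pod's own metadata.name.)

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name   # exposed as the file /etc/podinfo/podname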
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:38.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Jun 17 22:03:48.248: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
Jun 17 22:03:48.310: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:48.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8402" for this suite.
• [SLOW TEST:10.134 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":29,"skipped":522,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:01.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jun 17 22:03:01.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jun 17 22:03:19.656: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 22:03:28.299: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:48.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-71" for this suite.
• [SLOW TEST:47.496 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":5,"skipped":110,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSS
------------------------------
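(Editor's note: the "one multiversion CRD" case above corresponds to a CRD serving two versions at once. A minimal sketch under assumed names; with identical structural schemas no conversion webhook is needed, and both served versions then appear in the aggregated OpenAPI document at /openapi/v2.)

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: examples.multiversion.example.com   # hypothetical
  spec:
    group: multiversion.example.com
    scope: Namespaced
    names:
      plural: examples
      singular: example
      kind: Example
    versions:
    - name: v1
      served: true
      storage: true        # exactly one version is the storage version
      schema:
        openAPIV3Schema:
          type: object
    - name: v2
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object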
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:44.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:03:44.707: INFO: The status of Pod busybox-scheduling-bd5d55f4-0703-428f-b3fa-fe5155a119c4 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:03:46.710: INFO: The status of Pod busybox-scheduling-bd5d55f4-0703-428f-b3fa-fe5155a119c4 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:03:48.710: INFO: The status of Pod busybox-scheduling-bd5d55f4-0703-428f-b3fa-fe5155a119c4 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:03:50.712: INFO: The status of Pod busybox-scheduling-bd5d55f4-0703-428f-b3fa-fe5155a119c4 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:50.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3747" for this suite.
• [SLOW TEST:6.056 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":296,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
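(Editor's note: the Kubelet log test above is the classic "echo from a busybox pod, read it back with kubectl logs" pattern. A minimal sketch with illustrative names and message.)

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-scheduling-example   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh", "-c", "echo 'Hello from busybox'"]

  # Once the container has run:
  # kubectl logs busybox-scheduling-example
  # -> Hello from busybox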
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:48.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 22:03:48.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2" in namespace "projected-2049" to be "Succeeded or Failed"
Jun 17 22:03:48.405: INFO: Pod "downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26124ms
Jun 17 22:03:50.408: INFO: Pod "downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005914829s
Jun 17 22:03:52.412: INFO: Pod "downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009625314s
STEP: Saw pod success
Jun 17 22:03:52.412: INFO: Pod "downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2" satisfied condition "Succeeded or Failed"
Jun 17 22:03:52.414: INFO: Trying to get logs from node node2 pod downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2 container client-container:
STEP: delete the pod
Jun 17 22:03:52.427: INFO: Waiting for pod downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2 to disappear
Jun 17 22:03:52.429: INFO: Pod downwardapi-volume-612f2866-e9c9-4d18-b1d2-fd0435e0ecd2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:52.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2049" for this suite.
•
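(Editor's note: the "set mode on item file" variant differs from the earlier downward API sketch only in the per-item mode field; the container then stats the file and prints its permissions. The fragment below is an assumed illustration, 0400 being an arbitrary example mode.)

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # per-file mode the test asserts on (illustrative value)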
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:42.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jun 17 22:03:42.573: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2142 b962a3ce-df02-47b0-b669-64bd539dc877 39282 0 2022-06-17 22:03:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-17 22:03:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 17 22:03:42.573: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2142 b962a3ce-df02-47b0-b669-64bd539dc877 39284 0 2022-06-17 22:03:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-17 22:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 17 22:03:42.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2142 b962a3ce-df02-47b0-b669-64bd539dc877 39285 0 2022-06-17 22:03:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-17 22:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jun 17 22:03:52.592: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2142 b962a3ce-df02-47b0-b669-64bd539dc877 39515 0 2022-06-17 22:03:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-17 22:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:03:52.593: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2142 b962a3ce-df02-47b0-b669-64bd539dc877 39517 0 2022-06-17 22:03:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-06-17 22:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:03:52.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2142" for this suite. • [SLOW TEST:10.063 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":10,"skipped":145,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:48.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Jun 17 22:03:48.733: INFO: Waiting up to 5m0s for pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7" in namespace "containers-5669" to be "Succeeded or Failed" Jun 17 22:03:48.737: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334315ms Jun 17 22:03:50.740: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006511598s Jun 17 22:03:52.745: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011568078s Jun 17 22:03:54.749: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Succeeded", Reason="", readiness=false. 
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:48.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Jun 17 22:03:48.733: INFO: Waiting up to 5m0s for pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7" in namespace "containers-5669" to be "Succeeded or Failed"
Jun 17 22:03:48.737: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334315ms
Jun 17 22:03:50.740: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006511598s
Jun 17 22:03:52.745: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011568078s
Jun 17 22:03:54.749: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01514411s
STEP: Saw pod success
Jun 17 22:03:54.749: INFO: Pod "client-containers-909e9abc-4162-4222-bedb-8f343655e8a7" satisfied condition "Succeeded or Failed"
Jun 17 22:03:54.751: INFO: Trying to get logs from node node2 pod client-containers-909e9abc-4162-4222-bedb-8f343655e8a7 container agnhost-container:
STEP: delete the pod
Jun 17 22:03:54.764: INFO: Waiting for pod client-containers-909e9abc-4162-4222-bedb-8f343655e8a7 to disappear
Jun 17 22:03:54.765: INFO: Pod client-containers-909e9abc-4162-4222-bedb-8f343655e8a7 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:54.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5669" for this suite.
• [SLOW TEST:6.076 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":117,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
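(Editor's note: "override all" in the test above means overriding both the image ENTRYPOINT and CMD. A minimal sketch with illustrative names; in a Pod spec, command replaces ENTRYPOINT and args replaces CMD.)

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-example   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: agnhost-container
      image: busybox
      command: ["/bin/echo"]          # overrides the image's ENTRYPOINT
      args: ["override", "all"]       # overrides the image's CMD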
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:50.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 17 22:03:50.801: INFO: Waiting up to 5m0s for pod "pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e" in namespace "emptydir-4252" to be "Succeeded or Failed"
Jun 17 22:03:50.804: INFO: Pod "pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.034612ms
Jun 17 22:03:52.806: INFO: Pod "pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00523005s
Jun 17 22:03:54.809: INFO: Pod "pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008307264s
STEP: Saw pod success
Jun 17 22:03:54.809: INFO: Pod "pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e" satisfied condition "Succeeded or Failed"
Jun 17 22:03:54.811: INFO: Trying to get logs from node node1 pod pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e container test-container:
STEP: delete the pod
Jun 17 22:03:54.978: INFO: Waiting for pod pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e to disappear
Jun 17 22:03:54.979: INFO: Pod pod-cc45c3ca-ac68-429a-8d92-4cd5df41103e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:54.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4252" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":316,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
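(Editor's note: the "(root,0644,tmpfs)" naming encodes the user, file mode, and medium the emptyDir test writes and verifies. The tmpfs part is just the Memory medium; a minimal sketch with illustrative names.)

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-tmpfs-example   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["/bin/sh", "-c", "ls -l /test-volume && mount | grep test-volume"]
    volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory   # tmpfs-backed emptyDir, as in the (root,0644,tmpfs) variant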
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:29.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jun 17 22:03:29.303: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 22:03:38.473: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:57.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-685" for this suite.
• [SLOW TEST:27.933 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":10,"skipped":361,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:54.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 17 22:03:54.850: INFO: Waiting up to 5m0s for pod "pod-609f58e4-baba-4a0f-9634-941729d96504" in namespace "emptydir-9710" to be "Succeeded or Failed"
Jun 17 22:03:54.852: INFO: Pod "pod-609f58e4-baba-4a0f-9634-941729d96504": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247109ms
Jun 17 22:03:56.855: INFO: Pod "pod-609f58e4-baba-4a0f-9634-941729d96504": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005056417s
Jun 17 22:03:58.860: INFO: Pod "pod-609f58e4-baba-4a0f-9634-941729d96504": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009455811s
STEP: Saw pod success
Jun 17 22:03:58.860: INFO: Pod "pod-609f58e4-baba-4a0f-9634-941729d96504" satisfied condition "Succeeded or Failed"
Jun 17 22:03:58.862: INFO: Trying to get logs from node node2 pod pod-609f58e4-baba-4a0f-9634-941729d96504 container test-container:
STEP: delete the pod
Jun 17 22:03:58.876: INFO: Waiting for pod pod-609f58e4-baba-4a0f-9634-941729d96504 to disappear
Jun 17 22:03:58.878: INFO: Pod pod-609f58e4-baba-4a0f-9634-941729d96504 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:58.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9710" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":142,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":552,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:52.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Jun 17 22:03:52.477: INFO: The status of Pod annotationupdate00355489-2827-4d66-a2c7-a101dc59ecec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:03:54.481: INFO: The status of Pod annotationupdate00355489-2827-4d66-a2c7-a101dc59ecec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:03:56.480: INFO: The status of Pod annotationupdate00355489-2827-4d66-a2c7-a101dc59ecec is Running (Ready = true)
Jun 17 22:03:57.001: INFO: Successfully updated pod "annotationupdate00355489-2827-4d66-a2c7-a101dc59ecec"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:03:59.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7282" for this suite.
• [SLOW TEST:6.594 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:52.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 17 22:03:53.070: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 17 22:03:55.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 22:03:57.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100233, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 17 22:04:00.089: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:00.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4115" for this suite.
STEP: Destroying namespace "webhook-4115-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.612 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":164,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSS
------------------------------
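(Editor's note: the list-and-delete-collection steps above have direct CLI equivalents. The label selector below is an illustrative assumption; the suite tags its test webhooks with a label so the whole collection can be deleted in one call.)

  # List all validating webhook configurations in the cluster
  kubectl get validatingwebhookconfigurations

  # Delete a labeled collection of them in one request (label is illustrative)
  kubectl delete validatingwebhookconfigurations -l e2e-test-webhook-list=example-uid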
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:55.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 22:03:55.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104" in namespace "downward-api-3677" to be "Succeeded or Failed"
Jun 17 22:03:55.051: INFO: Pod "downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623922ms
Jun 17 22:03:57.054: INFO: Pod "downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006233594s
Jun 17 22:03:59.058: INFO: Pod "downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010372277s
Jun 17 22:04:01.063: INFO: Pod "downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01520745s
STEP: Saw pod success
Jun 17 22:04:01.063: INFO: Pod "downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104" satisfied condition "Succeeded or Failed"
Jun 17 22:04:01.066: INFO: Trying to get logs from node node1 pod downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104 container client-container:
STEP: delete the pod
Jun 17 22:04:01.079: INFO: Waiting for pod downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104 to disappear
Jun 17 22:04:01.081: INFO: Pod downwardapi-volume-377e5cd5-a026-4863-88bf-5aca0a647104 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:01.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3677" for this suite.
• [SLOW TEST:6.071 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":333,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:04:01.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:01.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1692" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":32,"skipped":339,"failed":0}
SS
------------------------------
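(Editor's note: the Lease test above exercises the coordination.k8s.io API that, among other things, backs node heartbeats. A minimal sketch of poking it by hand; the Lease name and holder below are illustrative.)

  # Node heartbeat leases live in kube-node-lease
  kubectl get leases -n kube-node-lease

  kubectl create -f - <<'EOF'
  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: example-lease        # hypothetical
  spec:
    holderIdentity: example-holder
    leaseDurationSeconds: 30
  EOF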
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:57.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption is created
Jun 17 22:03:57.370: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:03:59.374: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Jun 17 22:04:01.375: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:02.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3676" for this suite.
• [SLOW TEST:5.062 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":11,"skipped":427,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
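(Editor's note: "adoption" above means the controller claims a pre-existing pod whose labels match its selector, by setting an ownerReference on it, rather than creating a replacement. A minimal sketch under assumed names and a placeholder image; apply the Pod first, then the RC.)

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption
    labels:
      name: pod-adoption          # the label the RC selector will match
  spec:
    containers:
    - name: pod-adoption
      image: k8s.gcr.io/pause:3.4.1   # placeholder image
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption          # matches the existing pod, which gets adopted
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pod-adoption
          image: k8s.gcr.io/pause:3.4.1

  # Afterwards the pod carries an ownerReference to the RC:
  # kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'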
Phase="Pending", Reason="", readiness=false. Elapsed: 2.099386ms Jun 17 22:04:02.309: INFO: Pod "downwardapi-volume-3a5a98ea-71b4-446b-95d2-b00919388a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005909877s Jun 17 22:04:04.313: INFO: Pod "downwardapi-volume-3a5a98ea-71b4-446b-95d2-b00919388a1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009924531s STEP: Saw pod success Jun 17 22:04:04.313: INFO: Pod "downwardapi-volume-3a5a98ea-71b4-446b-95d2-b00919388a1e" satisfied condition "Succeeded or Failed" Jun 17 22:04:04.315: INFO: Trying to get logs from node node1 pod downwardapi-volume-3a5a98ea-71b4-446b-95d2-b00919388a1e container client-container: STEP: delete the pod Jun 17 22:04:04.333: INFO: Waiting for pod downwardapi-volume-3a5a98ea-71b4-446b-95d2-b00919388a1e to disappear Jun 17 22:04:04.336: INFO: Pod downwardapi-volume-3a5a98ea-71b4-446b-95d2-b00919388a1e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:04.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4954" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":175,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:04.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jun 17 22:04:05.064: INFO: starting watch STEP: patching STEP: updating Jun 17 22:04:05.071: INFO: waiting for watch events with expected annotations Jun 17 22:04:05.071: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:05.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4463" for this suite. 
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:59.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token:
Jun 17 22:03:59.106: INFO: Waiting up to 5m0s for pod "test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb" in namespace "svcaccounts-4081" to be "Succeeded or Failed"
Jun 17 22:03:59.108: INFO: Pod "test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.721245ms
Jun 17 22:04:01.111: INFO: Pod "test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004843924s
Jun 17 22:04:03.114: INFO: Pod "test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007716594s
Jun 17 22:04:05.117: INFO: Pod "test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010714611s
STEP: Saw pod success
Jun 17 22:04:05.117: INFO: Pod "test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb" satisfied condition "Succeeded or Failed"
Jun 17 22:04:05.119: INFO: Trying to get logs from node node2 pod test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb container agnhost-container:
STEP: delete the pod
Jun 17 22:04:05.138: INFO: Waiting for pod test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb to disappear
Jun 17 22:04:05.140: INFO: Pod test-pod-fbcbe9de-67d4-464e-8cad-05f832cc40bb no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:05.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4081" for this suite.
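(Editor's note: a "projected service account token" is a serviceAccountToken source inside a projected volume, giving a short-lived, audience-bound token instead of the legacy Secret-based one. A minimal sketch; all names, the audience, and the expiry are illustrative assumptions.)

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod-projected-token   # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: agnhost-container
      image: busybox
      command: ["cat", "/var/run/secrets/tokens/sa-token"]
      volumeMounts:
      - name: token
        mountPath: /var/run/secrets/tokens
    volumes:
    - name: token
      projected:
        sources:
        - serviceAccountToken:
            path: sa-token
            expirationSeconds: 3600        # kubelet rotates before expiry
            audience: example-audience     # illustrative audience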
• [SLOW TEST:6.072 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":32,"skipped":574,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:04:05.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:04:05.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8450 version'
Jun 17 22:04:05.306: INFO: stderr: ""
Jun 17 22:04:05.306: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:05.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8450" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":14,"skipped":219,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:04:01.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jun 17 22:04:01.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 17 22:04:03.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100241, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100241, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100241, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100241, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 17 22:04:06.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:04:07.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6976" for this suite.
STEP: Destroying namespace "webhook-6976-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.563 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":33,"skipped":341,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:03:34.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 22:03:34.884: INFO: created pod
Jun 17 22:03:34.884: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-7418" to be "Succeeded or Failed"
Jun 17 22:03:34.886: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 1.955177ms
Jun 17 22:03:36.889: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005568655s
Jun 17 22:03:38.894: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010619631s
STEP: Saw pod success
Jun 17 22:03:38.894: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Jun 17 22:04:08.895: INFO: polling logs
Jun 17 22:04:08.902: INFO: Pod logs:
2022/06/17 22:03:37 OK: Got token
2022/06/17 22:03:37 validating with in-cluster discovery
2022/06/17 22:03:37 OK: got issuer https://kubernetes.default.svc.cluster.local
2022/06/17 22:03:37 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-7418:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1655504015, NotBefore:1655503415, IssuedAt:1655503415, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-7418", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"6b436f17-1fb6-4a00-af57-1e61d9fed6c6"}}}
2022/06/17 22:03:37 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
2022/06/17 22:03:37 OK: Validated signature on JWT
2022/06/17 22:03:37 OK: Got valid claims from token!
2022/06/17 22:03:37 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-7418:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1655504015, NotBefore:1655503415, IssuedAt:1655503415, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-7418", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"6b436f17-1fb6-4a00-af57-1e61d9fed6c6"}}} Jun 17 22:04:08.902: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:08.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7418" for this suite. • [SLOW TEST:34.066 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":24,"skipped":249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:07.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jun 17 22:04:07.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-225 create -f -' Jun 17 22:04:08.205: INFO: stderr: "" Jun 17 22:04:08.205: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jun 17 22:04:09.208: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:04:09.208: INFO: Found 0 / 1 Jun 17 22:04:10.220: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:04:10.220: INFO: Found 0 / 1 Jun 17 22:04:11.209: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:04:11.209: INFO: Found 1 / 1 Jun 17 22:04:11.209: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 17 22:04:11.212: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:04:11.212: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 17 22:04:11.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-225 patch pod agnhost-primary-662mp -p {"metadata":{"annotations":{"x":"y"}}}' Jun 17 22:04:11.385: INFO: stderr: "" Jun 17 22:04:11.385: INFO: stdout: "pod/agnhost-primary-662mp patched\n" STEP: checking annotations Jun 17 22:04:11.387: INFO: Selector matched 1 pods for map[app:agnhost] Jun 17 22:04:11.387: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:11.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-225" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":34,"skipped":353,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:03:58.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-277.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-277.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-277.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-277.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-277.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-277.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-277.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-277.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 95.35.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.35.95_udp@PTR;check="$$(dig +tcp +noall +answer +search 95.35.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.35.95_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-277.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-277.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-277.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-277.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-277.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-277.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-277.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-277.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-277.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 95.35.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.35.95_udp@PTR;check="$$(dig +tcp +noall +answer +search 95.35.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.35.95_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:04:06.980: INFO: Unable to read wheezy_udp@dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:06.982: INFO: Unable to read wheezy_tcp@dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:06.985: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:06.987: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:07.006: INFO: Unable to read jessie_udp@dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:07.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:07.010: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:07.013: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-277.svc.cluster.local from pod dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90: the server could not find the requested resource (get pods dns-test-23344920-d820-4e58-a19e-85f5ec996d90) Jun 17 22:04:07.028: INFO: Lookups using dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90 failed for: [wheezy_udp@dns-test-service.dns-277.svc.cluster.local wheezy_tcp@dns-test-service.dns-277.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-277.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-277.svc.cluster.local jessie_udp@dns-test-service.dns-277.svc.cluster.local jessie_tcp@dns-test-service.dns-277.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-277.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-277.svc.cluster.local] Jun 17 22:04:12.076: INFO: DNS probes using dns-277/dns-test-23344920-d820-4e58-a19e-85f5ec996d90 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:12.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-277" for this suite. 
• [SLOW TEST:13.190 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":8,"skipped":156,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:05.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:04:05.366: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 17 22:04:10.372: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Jun 17 22:04:10.379: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Jun 17 22:04:10.383: INFO: observed ReplicaSet test-rs in namespace replicaset-8549 with ReadyReplicas 1, AvailableReplicas 1 Jun 17 22:04:10.393: INFO: observed ReplicaSet test-rs in namespace replicaset-8549 with ReadyReplicas 1, AvailableReplicas 1 Jun 17 22:04:10.402: INFO: observed ReplicaSet test-rs in namespace replicaset-8549 with ReadyReplicas 1, AvailableReplicas 1 Jun 17 22:04:10.405: INFO: observed ReplicaSet test-rs in namespace replicaset-8549 with ReadyReplicas 1, AvailableReplicas 1 Jun 17 22:04:12.832: INFO: observed ReplicaSet test-rs in namespace replicaset-8549 with ReadyReplicas 2, AvailableReplicas 2 Jun 17 22:04:13.869: INFO: observed Replicaset test-rs in namespace replicaset-8549 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:13.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8549" for this suite. 
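The scale-up this test performs through the API can be expressed as a strategic-merge patch. A sketch reusing the names from the run above (the namespace no longer exists once the suite tears it down):

# Patch spec.replicas from 1 to 3, then watch ReadyReplicas and
# AvailableReplicas converge on the new count.
kubectl patch rs test-rs -n replicaset-8549 -p '{"spec":{"replicas":3}}'
kubectl get rs test-rs -n replicaset-8549 -w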
• [SLOW TEST:8.541 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":15,"skipped":232,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:08.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:04:09.051: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2bd60f2d-7c9a-4fea-8825-c9c59b0b2530", Controller:(*bool)(0xc00647be82), BlockOwnerDeletion:(*bool)(0xc00647be83)}} Jun 17 22:04:09.055: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"76e8b891-e54d-4f93-98ea-5f58073107a0", Controller:(*bool)(0xc006431902), BlockOwnerDeletion:(*bool)(0xc006431903)}} Jun 17 22:04:09.059: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"515bf0fa-db77-45ac-bed8-2f10ab59eb7a", Controller:(*bool)(0xc006431c2a), BlockOwnerDeletion:(*bool)(0xc006431c2b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:14.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5011" for this suite. 
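The dependency circle above is built purely out of metadata.ownerReferences. A sketch of wiring one edge by hand, assuming pod1 and pod3 already exist; the garbage collector is expected to break such a circle rather than deadlock on blockOwnerDeletion:

# Read pod3's UID, then make pod1 owned by pod3 (one edge of the
# three-pod ownership circle shown in the log above).
UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge \
  -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$UID\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"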
• [SLOW TEST:5.081 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":25,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:14.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Jun 17 22:04:14.146: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Jun 17 22:04:14.161: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:14.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8203" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":26,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:11.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Jun 17 22:04:11.459: INFO: Waiting up to 5m0s for pod "pod-60ebee99-095a-4491-95e2-c6cdea4e73a3" in namespace "emptydir-9008" to be "Succeeded or Failed" Jun 17 22:04:11.464: INFO: Pod "pod-60ebee99-095a-4491-95e2-c6cdea4e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.55344ms Jun 17 22:04:13.467: INFO: Pod "pod-60ebee99-095a-4491-95e2-c6cdea4e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833181s Jun 17 22:04:15.472: INFO: Pod "pod-60ebee99-095a-4491-95e2-c6cdea4e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012496097s Jun 17 22:04:17.475: INFO: Pod "pod-60ebee99-095a-4491-95e2-c6cdea4e73a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01614302s STEP: Saw pod success Jun 17 22:04:17.475: INFO: Pod "pod-60ebee99-095a-4491-95e2-c6cdea4e73a3" satisfied condition "Succeeded or Failed" Jun 17 22:04:17.480: INFO: Trying to get logs from node node2 pod pod-60ebee99-095a-4491-95e2-c6cdea4e73a3 container test-container: STEP: delete the pod Jun 17 22:04:17.537: INFO: Waiting for pod pod-60ebee99-095a-4491-95e2-c6cdea4e73a3 to disappear Jun 17 22:04:17.538: INFO: Pod pod-60ebee99-095a-4491-95e2-c6cdea4e73a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:17.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9008" for this suite. • [SLOW TEST:6.141 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":359,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:12.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 17 22:04:12.165: INFO: Waiting up to 5m0s for pod "downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f" in namespace "downward-api-6769" to be "Succeeded or Failed" Jun 17 22:04:12.167: INFO: Pod "downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223474ms Jun 17 22:04:14.170: INFO: Pod "downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004983811s Jun 17 22:04:16.175: INFO: Pod "downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009945141s Jun 17 22:04:18.182: INFO: Pod "downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016546663s STEP: Saw pod success Jun 17 22:04:18.182: INFO: Pod "downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f" satisfied condition "Succeeded or Failed" Jun 17 22:04:18.184: INFO: Trying to get logs from node node2 pod downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f container dapi-container: STEP: delete the pod Jun 17 22:04:18.198: INFO: Waiting for pod downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f to disappear Jun 17 22:04:18.200: INFO: Pod downward-api-84d0d7d0-fb93-4eb9-ad0e-b61e6c947f2f no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:18.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6769" for this suite. • [SLOW TEST:6.077 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":172,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:13.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-601fd57e-33db-4356-8997-e2159f8f3d79 STEP: Creating a pod to test consume configMaps Jun 17 22:04:13.917: INFO: Waiting up to 5m0s for pod "pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf" in namespace "configmap-1132" to be "Succeeded or Failed" Jun 17 22:04:13.919: INFO: Pod "pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048978ms Jun 17 22:04:15.923: INFO: Pod "pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006598048s Jun 17 22:04:17.927: INFO: Pod "pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010579676s Jun 17 22:04:19.931: INFO: Pod "pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013833095s STEP: Saw pod success Jun 17 22:04:19.931: INFO: Pod "pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf" satisfied condition "Succeeded or Failed" Jun 17 22:04:19.933: INFO: Trying to get logs from node node2 pod pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf container configmap-volume-test: STEP: delete the pod Jun 17 22:04:19.944: INFO: Waiting for pod pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf to disappear Jun 17 22:04:19.946: INFO: Pod pod-configmaps-473ad002-3e5d-49e5-99c3-54c86cc47dbf no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:19.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1132" for this suite. • [SLOW TEST:6.072 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":233,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:14.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4683.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4683.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4683.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4683.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4683.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4683.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:04:22.385: INFO: DNS probes using dns-4683/dns-test-6e82e5ac-2227-453b-8a2e-83390dee562e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:22.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4683" for this suite. • [SLOW TEST:8.091 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:17.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-c3435539-2ea6-42cd-b765-95c16dd9faf6 STEP: Creating a pod to test consume configMaps Jun 17 22:04:17.596: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f" in namespace "configmap-7158" to be "Succeeded or Failed" Jun 17 22:04:17.599: INFO: Pod "pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.00926ms Jun 17 22:04:19.604: INFO: Pod "pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008008452s Jun 17 22:04:21.607: INFO: Pod "pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01170996s Jun 17 22:04:23.611: INFO: Pod "pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015455472s STEP: Saw pod success Jun 17 22:04:23.611: INFO: Pod "pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f" satisfied condition "Succeeded or Failed" Jun 17 22:04:23.614: INFO: Trying to get logs from node node2 pod pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f container agnhost-container: STEP: delete the pod Jun 17 22:04:23.626: INFO: Waiting for pod pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f to disappear Jun 17 22:04:23.628: INFO: Pod pod-configmaps-8a63b097-a007-406a-be64-eda2139a086f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:23.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7158" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":362,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:18.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:04:18.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1" in namespace "downward-api-1510" to be "Succeeded or Failed" Jun 17 22:04:18.267: INFO: Pod "downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.127977ms Jun 17 22:04:20.270: INFO: Pod "downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006455761s Jun 17 22:04:22.274: INFO: Pod "downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01076573s Jun 17 22:04:24.278: INFO: Pod "downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014997046s STEP: Saw pod success Jun 17 22:04:24.278: INFO: Pod "downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1" satisfied condition "Succeeded or Failed" Jun 17 22:04:24.281: INFO: Trying to get logs from node node1 pod downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1 container client-container: STEP: delete the pod Jun 17 22:04:24.296: INFO: Waiting for pod downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1 to disappear Jun 17 22:04:24.297: INFO: Pod downwardapi-volume-d6a32e15-1aaf-4278-b23a-049d4539ffc1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:24.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1510" for this suite. • [SLOW TEST:6.075 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":183,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:19.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Jun 17 22:04:20.005: INFO: Waiting up to 5m0s for pod "var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca" in namespace "var-expansion-2758" to be "Succeeded or Failed" Jun 17 22:04:20.007: INFO: Pod "var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267972ms Jun 17 22:04:22.012: INFO: Pod "var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00745818s Jun 17 22:04:24.017: INFO: Pod "var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012573734s Jun 17 22:04:26.023: INFO: Pod "var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01764953s STEP: Saw pod success Jun 17 22:04:26.023: INFO: Pod "var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca" satisfied condition "Succeeded or Failed" Jun 17 22:04:26.025: INFO: Trying to get logs from node node1 pod var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca container dapi-container: STEP: delete the pod Jun 17 22:04:26.039: INFO: Waiting for pod var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca to disappear Jun 17 22:04:26.041: INFO: Pod var-expansion-4b3a6c82-2b60-406d-9c1a-8c9776dbebca no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:26.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2758" for this suite. • [SLOW TEST:6.086 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":236,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:23.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-3fa310dc-a0cd-426b-bf36-ad15f7c15f87 STEP: Creating a pod to test consume secrets Jun 17 22:04:23.689: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7" in namespace "projected-2052" to be "Succeeded or Failed" Jun 17 22:04:23.693: INFO: Pod "pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.426984ms Jun 17 22:04:25.696: INFO: Pod "pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007093366s Jun 17 22:04:27.700: INFO: Pod "pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011015105s STEP: Saw pod success Jun 17 22:04:27.700: INFO: Pod "pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7" satisfied condition "Succeeded or Failed" Jun 17 22:04:27.703: INFO: Trying to get logs from node node2 pod pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7 container projected-secret-volume-test: STEP: delete the pod Jun 17 22:04:27.745: INFO: Waiting for pod pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7 to disappear Jun 17 22:04:27.749: INFO: Pod pod-projected-secrets-b500074e-43fd-4262-acb1-4aea9455eed7 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:27.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2052" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":374,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:02.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-t47x STEP: Creating a pod to test atomic-volume-subpath Jun 17 22:04:02.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t47x" in namespace "subpath-3167" to be "Succeeded or Failed" Jun 17 22:04:02.565: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Pending", Reason="", readiness=false. Elapsed: 3.995746ms Jun 17 22:04:04.569: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00765557s Jun 17 22:04:06.573: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 4.012183731s Jun 17 22:04:08.576: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 6.015011381s Jun 17 22:04:10.581: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 8.019474289s Jun 17 22:04:12.584: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 10.023189827s Jun 17 22:04:14.590: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 12.028458508s Jun 17 22:04:16.593: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 14.032118065s Jun 17 22:04:18.597: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 16.035758042s Jun 17 22:04:20.601: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 18.039851535s Jun 17 22:04:22.604: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.043055198s Jun 17 22:04:24.607: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 22.045620355s Jun 17 22:04:26.610: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Running", Reason="", readiness=true. Elapsed: 24.048495593s Jun 17 22:04:28.614: INFO: Pod "pod-subpath-test-secret-t47x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.052693853s STEP: Saw pod success Jun 17 22:04:28.614: INFO: Pod "pod-subpath-test-secret-t47x" satisfied condition "Succeeded or Failed" Jun 17 22:04:28.616: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-t47x container test-container-subpath-secret-t47x: STEP: delete the pod Jun 17 22:04:28.628: INFO: Waiting for pod pod-subpath-test-secret-t47x to disappear Jun 17 22:04:28.629: INFO: Pod pod-subpath-test-secret-t47x no longer exists STEP: Deleting pod pod-subpath-test-secret-t47x Jun 17 22:04:28.630: INFO: Deleting pod "pod-subpath-test-secret-t47x" in namespace "subpath-3167" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:28.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3167" for this suite. • [SLOW TEST:26.115 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":497,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:26.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:30.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8335" for this suite. 
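The ordering guarantee checked here can be probed with the raw watch API: start several watches from the same resourceVersion and compare the streams. A sketch; the namespace and resource type are illustrative:

# Record the list's resourceVersion, then replay events from that point.
RV=$(kubectl get configmaps -n watch-8335 -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/watch-8335/configmaps?watch=true&resourceVersion=${RV}"
# Two such streams started from the same RV must deliver the same events
# in the same order.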
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":18,"skipped":240,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:22.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:04:22.801: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:04:24.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:04:26.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100262, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:04:29.822: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jun 17 22:04:30.836: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:30.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-691" for this suite. STEP: Destroying namespace "webhook-691-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.443 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":28,"skipped":405,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:05.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jun 17 22:04:05.184: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:04:13.791: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:32.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5678" for this suite. 
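Once a CRD is established, its schema is published in the aggregated OpenAPI document and becomes visible to kubectl explain. A sketch of checking both, with <plural> standing in for the custom resource's plural name (the e2e CRD names are generated per run):

# The definition should appear in /openapi/v2, and explain should render
# the CRD's fields.
kubectl get --raw /openapi/v2 | grep -c '<plural>'
kubectl explain <plural>.spec --recursive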
• [SLOW TEST:27.410 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":33,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:24.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:38.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5382" for this suite. 
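The lifecycle steps logged above map onto plain kubectl operations; a sketch with an illustrative controller name:

kubectl patch rc my-rc -p '{"metadata":{"labels":{"rc":"patched"}}}'  # patch metadata
kubectl scale rc my-rc --replicas=2                                   # scale via the scale subresource
kubectl get rc my-rc -o jsonpath='{.status.readyReplicas}'            # fetch status
kubectl delete rc -l rc=patched                                       # delete matching controllers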
• [SLOW TEST:13.806 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":11,"skipped":201,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:38.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:38.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3079" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":12,"skipped":245,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:30.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:38.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2686" for this suite. 
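[Annotation] The sysctl spec above sets kernel.shm_rmid_forced through the pod's security context and then reads the value back from the pod logs. A minimal sketch of a pod that does the same, assuming a generic busybox image rather than whatever the suite actually runs:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Request the safe sysctl in the pod security context, then echo it
	// back so the value can be verified from the pod logs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "probe",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}

Because kernel.shm_rmid_forced is on the kubelet's safe-sysctl list, no special node configuration is needed; unsafe sysctls would additionally require --allowed-unsafe-sysctls on the kubelet.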
• [SLOW TEST:8.056 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":19,"skipped":245,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:27.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8289 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8289;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8289 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8289;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8289.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8289.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8289.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8289.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8289.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8289.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8289.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8289.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8289.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 192.11.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.11.192_udp@PTR;check="$$(dig +tcp +noall +answer +search 192.11.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.11.192_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8289 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8289;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8289 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8289;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8289.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8289.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8289.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8289.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8289.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8289.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8289.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8289.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8289.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8289.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 192.11.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.11.192_udp@PTR;check="$$(dig +tcp +noall +answer +search 192.11.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.11.192_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:04:33.841: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.843: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.846: INFO: Unable to read wheezy_udp@dns-test-service.dns-8289 from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8289 from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.852: INFO: Unable to read wheezy_udp@dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.854: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.857: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.860: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.878: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.881: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.883: INFO: Unable to read jessie_udp@dns-test-service.dns-8289 from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.886: INFO: Unable to read jessie_tcp@dns-test-service.dns-8289 from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.889: INFO: Unable to read jessie_udp@dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.892: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.894: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.896: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8289.svc from pod dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa: the server could not find the requested resource (get pods dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa) Jun 17 22:04:33.912: INFO: Lookups using dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8289 wheezy_tcp@dns-test-service.dns-8289 wheezy_udp@dns-test-service.dns-8289.svc wheezy_tcp@dns-test-service.dns-8289.svc wheezy_udp@_http._tcp.dns-test-service.dns-8289.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8289.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8289 jessie_tcp@dns-test-service.dns-8289 jessie_udp@dns-test-service.dns-8289.svc jessie_tcp@dns-test-service.dns-8289.svc jessie_udp@_http._tcp.dns-test-service.dns-8289.svc jessie_tcp@_http._tcp.dns-test-service.dns-8289.svc] Jun 17 22:04:38.978: INFO: DNS probes using dns-8289/dns-test-180dfe7c-1fb1-460c-80ac-4c8eb1f9aeaa succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:39.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8289" for this suite. 
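[Annotation] The partial qualified names in the probe scripts above (dns-test-service, dns-test-service.dns-8289, ...) resolve because dig runs with +search, so the pod's resolv.conf search domains expand them to the full dns-test-service.dns-8289.svc.cluster.local form. The SRV lookups (_http._tcp.dns-test-service...) work only because the test's headless service carries a named TCP port. A minimal client-go sketch of such a service follows; the namespace and selector are placeholders.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Headless service (ClusterIP: None) with a named "http" TCP port; the
	// port name and protocol are what make _http._tcp.<svc>.<ns>.svc SRV
	// records answerable by cluster DNS.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports: []corev1.ServicePort{{
				Name: "http", Port: 80, Protocol: corev1.ProtocolTCP,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	_, err = cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	must(err)
}

The early "Unable to read ... could not find the requested resource" lines above are expected: the probe loop retries until DNS converges, and the run is only judged by the eventual "DNS probes ... succeeded".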
• [SLOW TEST:11.227 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":387,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:28.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Jun 17 22:04:28.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 create -f -' Jun 17 22:04:29.105: INFO: stderr: "" Jun 17 22:04:29.105: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 17 22:04:29.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:04:29.280: INFO: stderr: "" Jun 17 22:04:29.280: INFO: stdout: "update-demo-nautilus-8rf27 update-demo-nautilus-lccpd " Jun 17 22:04:29.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods update-demo-nautilus-8rf27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:04:29.458: INFO: stderr: "" Jun 17 22:04:29.458: INFO: stdout: "" Jun 17 22:04:29.458: INFO: update-demo-nautilus-8rf27 is created but not running Jun 17 22:04:34.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:04:34.615: INFO: stderr: "" Jun 17 22:04:34.615: INFO: stdout: "update-demo-nautilus-8rf27 update-demo-nautilus-lccpd " Jun 17 22:04:34.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods update-demo-nautilus-8rf27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:04:34.787: INFO: stderr: "" Jun 17 22:04:34.787: INFO: stdout: "" Jun 17 22:04:34.787: INFO: update-demo-nautilus-8rf27 is created but not running Jun 17 22:04:39.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jun 17 22:04:39.973: INFO: stderr: "" Jun 17 22:04:39.973: INFO: stdout: "update-demo-nautilus-8rf27 update-demo-nautilus-lccpd " Jun 17 22:04:39.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods update-demo-nautilus-8rf27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:04:40.162: INFO: stderr: "" Jun 17 22:04:40.162: INFO: stdout: "true" Jun 17 22:04:40.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods update-demo-nautilus-8rf27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:04:40.343: INFO: stderr: "" Jun 17 22:04:40.343: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:04:40.343: INFO: validating pod update-demo-nautilus-8rf27 Jun 17 22:04:40.347: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:04:40.347: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:04:40.347: INFO: update-demo-nautilus-8rf27 is verified up and running Jun 17 22:04:40.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods update-demo-nautilus-lccpd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jun 17 22:04:40.509: INFO: stderr: "" Jun 17 22:04:40.509: INFO: stdout: "true" Jun 17 22:04:40.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods update-demo-nautilus-lccpd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jun 17 22:04:40.685: INFO: stderr: "" Jun 17 22:04:40.685: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Jun 17 22:04:40.685: INFO: validating pod update-demo-nautilus-lccpd Jun 17 22:04:40.689: INFO: got data: { "image": "nautilus.jpg" } Jun 17 22:04:40.689: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 17 22:04:40.689: INFO: update-demo-nautilus-lccpd is verified up and running STEP: using delete to clean up resources Jun 17 22:04:40.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 delete --grace-period=0 --force -f -' Jun 17 22:04:40.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:04:40.817: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 17 22:04:40.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get rc,svc -l name=update-demo --no-headers' Jun 17 22:04:41.024: INFO: stderr: "No resources found in kubectl-82 namespace.\n" Jun 17 22:04:41.024: INFO: stdout: "" Jun 17 22:04:41.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-82 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 17 22:04:41.215: INFO: stderr: "" Jun 17 22:04:41.215: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:41.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-82" for this suite. • [SLOW TEST:12.550 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":13,"skipped":514,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:39.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 17 22:04:39.072: INFO: Waiting up to 5m0s for pod "pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b" in namespace "emptydir-4405" to be "Succeeded or Failed" Jun 17 22:04:39.074: INFO: Pod "pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213191ms Jun 17 22:04:41.079: INFO: Pod "pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007681105s Jun 17 22:04:43.083: INFO: Pod "pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011245239s STEP: Saw pod success Jun 17 22:04:43.083: INFO: Pod "pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b" satisfied condition "Succeeded or Failed" Jun 17 22:04:43.085: INFO: Trying to get logs from node node2 pod pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b container test-container: STEP: delete the pod Jun 17 22:04:43.098: INFO: Waiting for pod pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b to disappear Jun 17 22:04:43.100: INFO: Pod pod-7b0f56e9-2b22-448c-9188-058dc3fd4d7b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:43.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4405" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":404,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:43.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Jun 17 22:04:43.163: INFO: created test-event-1 Jun 17 22:04:43.166: INFO: created test-event-2 Jun 17 22:04:43.170: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jun 17 22:04:43.175: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jun 17 22:04:43.200: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:43.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4009" for this suite. 
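[Annotation] The events spec above creates a labelled set of events and removes them with a single DeleteCollection call filtered by label selector, then lists again to confirm the count. A minimal sketch of that pattern, with the label key "testevent-set" and the involved-object reference invented here for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	events := cs.CoreV1().Events("default")

	// Create a few labelled events (test-event-1..3, as in the log above).
	for i := 1; i <= 3; i++ {
		ev := &corev1.Event{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("test-event-%d", i),
				Labels: map[string]string{"testevent-set": "true"},
			},
			Reason:  "Demo",
			Message: "synthetic event for DeleteCollection demo",
			Type:    corev1.EventTypeNormal,
			InvolvedObject: corev1.ObjectReference{
				Kind: "Pod", Namespace: "default", Name: "placeholder",
			},
		}
		_, err := events.Create(context.TODO(), ev, metav1.CreateOptions{})
		must(err)
	}

	// One call deletes the whole labelled collection.
	must(events.DeleteCollection(context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"}))
}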
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":40,"skipped":415,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:38.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-c5b5f98e-6762-450e-86df-c72600e1531e STEP: Creating a pod to test consume configMaps Jun 17 22:04:38.738: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8" in namespace "projected-2788" to be "Succeeded or Failed" Jun 17 22:04:38.740: INFO: Pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.815411ms Jun 17 22:04:40.745: INFO: Pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007881403s Jun 17 22:04:42.750: INFO: Pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012573449s Jun 17 22:04:44.754: INFO: Pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016216328s Jun 17 22:04:46.758: INFO: Pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020836961s STEP: Saw pod success Jun 17 22:04:46.758: INFO: Pod "pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8" satisfied condition "Succeeded or Failed" Jun 17 22:04:46.762: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8 container agnhost-container: STEP: delete the pod Jun 17 22:04:46.774: INFO: Waiting for pod pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8 to disappear Jun 17 22:04:46.776: INFO: Pod pod-projected-configmaps-6795e81e-6f15-48ba-b6f8-f1e90ab81ff8 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2788" for this suite. 
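[Annotation] The projected-configMap spec above mounts a ConfigMap into a pod through a projected volume and checks the container can read it back. A minimal sketch of that consumption path; "demo-cm" and its key "data" are assumptions, and the referenced ConfigMap must already exist.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Pod consuming a ConfigMap via a projected volume; the container prints
	// the mounted key so success shows up in its logs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/projected/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-cm", MountPath: "/etc/projected",
				}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}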
• [SLOW TEST:8.160 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":268,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:38.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:04:38.382: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3638 I0617 22:04:38.397077 35 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3638, replica count: 1 I0617 22:04:39.448662 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:40.449734 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:41.450309 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:42.450752 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:43.451484 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:44.451968 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:45.453349 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:04:45.561: INFO: Created: latency-svc-x2x9m Jun 17 22:04:45.566: INFO: Got endpoints: latency-svc-x2x9m [12.576704ms] Jun 17 22:04:45.572: INFO: Created: latency-svc-tc4q7 Jun 17 22:04:45.575: INFO: Got endpoints: latency-svc-tc4q7 [8.327569ms] Jun 17 22:04:45.575: INFO: Created: latency-svc-8nc4j Jun 17 22:04:45.577: INFO: Got endpoints: latency-svc-8nc4j [10.743856ms] Jun 17 22:04:45.579: INFO: Created: latency-svc-9pztk Jun 17 22:04:45.581: INFO: Got endpoints: latency-svc-9pztk [14.396116ms] Jun 17 22:04:45.581: INFO: Created: latency-svc-l47gh Jun 17 22:04:45.583: INFO: Got endpoints: latency-svc-l47gh [16.983433ms] Jun 17 22:04:45.584: INFO: Created: latency-svc-fdk69 Jun 17 22:04:45.586: INFO: Got endpoints: 
latency-svc-fdk69 [19.314522ms] Jun 17 22:04:45.587: INFO: Created: latency-svc-487xt Jun 17 22:04:45.589: INFO: Got endpoints: latency-svc-487xt [21.879563ms] Jun 17 22:04:45.589: INFO: Created: latency-svc-vthml Jun 17 22:04:45.592: INFO: Got endpoints: latency-svc-vthml [24.662267ms] Jun 17 22:04:45.592: INFO: Created: latency-svc-j26lr Jun 17 22:04:45.594: INFO: Got endpoints: latency-svc-j26lr [27.394884ms] Jun 17 22:04:45.595: INFO: Created: latency-svc-469rg Jun 17 22:04:45.597: INFO: Got endpoints: latency-svc-469rg [30.492392ms] Jun 17 22:04:45.598: INFO: Created: latency-svc-nr5r4 Jun 17 22:04:45.600: INFO: Got endpoints: latency-svc-nr5r4 [33.452635ms] Jun 17 22:04:45.601: INFO: Created: latency-svc-mq7hc Jun 17 22:04:45.603: INFO: Got endpoints: latency-svc-mq7hc [35.997424ms] Jun 17 22:04:45.605: INFO: Created: latency-svc-lpvcw Jun 17 22:04:45.606: INFO: Created: latency-svc-4x7wc Jun 17 22:04:45.607: INFO: Got endpoints: latency-svc-lpvcw [39.736313ms] Jun 17 22:04:45.608: INFO: Got endpoints: latency-svc-4x7wc [41.000344ms] Jun 17 22:04:45.608: INFO: Created: latency-svc-pd89x Jun 17 22:04:45.610: INFO: Got endpoints: latency-svc-pd89x [43.27594ms] Jun 17 22:04:45.612: INFO: Created: latency-svc-b9zq7 Jun 17 22:04:45.614: INFO: Got endpoints: latency-svc-b9zq7 [46.797061ms] Jun 17 22:04:45.615: INFO: Created: latency-svc-bkksn Jun 17 22:04:45.617: INFO: Got endpoints: latency-svc-bkksn [9.25174ms] Jun 17 22:04:45.618: INFO: Created: latency-svc-7469l Jun 17 22:04:45.622: INFO: Created: latency-svc-zp5ws Jun 17 22:04:45.622: INFO: Got endpoints: latency-svc-7469l [47.291228ms] Jun 17 22:04:45.624: INFO: Got endpoints: latency-svc-zp5ws [46.792279ms] Jun 17 22:04:45.625: INFO: Created: latency-svc-qd45r Jun 17 22:04:45.628: INFO: Got endpoints: latency-svc-qd45r [46.606652ms] Jun 17 22:04:45.628: INFO: Created: latency-svc-cj729 Jun 17 22:04:45.630: INFO: Got endpoints: latency-svc-cj729 [46.368502ms] Jun 17 22:04:45.631: INFO: Created: latency-svc-wjcn6 Jun 17 22:04:45.633: INFO: Got endpoints: latency-svc-wjcn6 [47.241704ms] Jun 17 22:04:45.634: INFO: Created: latency-svc-w4zwm Jun 17 22:04:45.636: INFO: Got endpoints: latency-svc-w4zwm [46.78184ms] Jun 17 22:04:45.636: INFO: Created: latency-svc-jlqc9 Jun 17 22:04:45.638: INFO: Got endpoints: latency-svc-jlqc9 [46.765301ms] Jun 17 22:04:45.639: INFO: Created: latency-svc-gksj2 Jun 17 22:04:45.641: INFO: Got endpoints: latency-svc-gksj2 [46.784522ms] Jun 17 22:04:45.641: INFO: Created: latency-svc-6qhm4 Jun 17 22:04:45.644: INFO: Got endpoints: latency-svc-6qhm4 [46.586008ms] Jun 17 22:04:45.646: INFO: Created: latency-svc-c9lzs Jun 17 22:04:45.649: INFO: Got endpoints: latency-svc-c9lzs [48.074795ms] Jun 17 22:04:45.649: INFO: Created: latency-svc-jr8sj Jun 17 22:04:45.651: INFO: Got endpoints: latency-svc-jr8sj [47.460496ms] Jun 17 22:04:45.651: INFO: Created: latency-svc-xh6jg Jun 17 22:04:45.654: INFO: Got endpoints: latency-svc-xh6jg [46.745963ms] Jun 17 22:04:45.654: INFO: Created: latency-svc-8sjwt Jun 17 22:04:45.656: INFO: Got endpoints: latency-svc-8sjwt [45.939024ms] Jun 17 22:04:45.658: INFO: Created: latency-svc-g2wlw Jun 17 22:04:45.660: INFO: Got endpoints: latency-svc-g2wlw [45.0601ms] Jun 17 22:04:45.660: INFO: Created: latency-svc-9qstz Jun 17 22:04:45.662: INFO: Created: latency-svc-q8qxs Jun 17 22:04:45.664: INFO: Got endpoints: latency-svc-9qstz [46.518633ms] Jun 17 22:04:45.665: INFO: Created: latency-svc-2wbcj Jun 17 22:04:45.670: INFO: Created: latency-svc-t7kdp Jun 17 22:04:45.672: INFO: Created: 
latency-svc-5xbnz Jun 17 22:04:45.675: INFO: Created: latency-svc-47vjg Jun 17 22:04:45.677: INFO: Created: latency-svc-j8jbp Jun 17 22:04:45.681: INFO: Created: latency-svc-n9nc2 Jun 17 22:04:45.684: INFO: Created: latency-svc-z7vsw Jun 17 22:04:45.685: INFO: Created: latency-svc-kl9rs Jun 17 22:04:45.688: INFO: Created: latency-svc-k97hd Jun 17 22:04:45.691: INFO: Created: latency-svc-krjv9 Jun 17 22:04:45.694: INFO: Created: latency-svc-sfvtk Jun 17 22:04:45.696: INFO: Created: latency-svc-wxs75 Jun 17 22:04:45.700: INFO: Created: latency-svc-q5887 Jun 17 22:04:45.702: INFO: Created: latency-svc-9kg6v Jun 17 22:04:45.715: INFO: Got endpoints: latency-svc-q8qxs [92.457402ms] Jun 17 22:04:45.720: INFO: Created: latency-svc-d9pr9 Jun 17 22:04:45.764: INFO: Got endpoints: latency-svc-2wbcj [139.898607ms] Jun 17 22:04:45.770: INFO: Created: latency-svc-947tn Jun 17 22:04:45.815: INFO: Got endpoints: latency-svc-t7kdp [187.349288ms] Jun 17 22:04:45.821: INFO: Created: latency-svc-qkj9l Jun 17 22:04:45.865: INFO: Got endpoints: latency-svc-5xbnz [235.209725ms] Jun 17 22:04:45.871: INFO: Created: latency-svc-f4dg7 Jun 17 22:04:45.914: INFO: Got endpoints: latency-svc-47vjg [280.924632ms] Jun 17 22:04:45.920: INFO: Created: latency-svc-fc5b9 Jun 17 22:04:45.965: INFO: Got endpoints: latency-svc-j8jbp [329.7937ms] Jun 17 22:04:45.971: INFO: Created: latency-svc-2flx6 Jun 17 22:04:46.016: INFO: Got endpoints: latency-svc-n9nc2 [377.371146ms] Jun 17 22:04:46.021: INFO: Created: latency-svc-tnqvp Jun 17 22:04:46.065: INFO: Got endpoints: latency-svc-z7vsw [423.633403ms] Jun 17 22:04:46.071: INFO: Created: latency-svc-7vx64 Jun 17 22:04:46.115: INFO: Got endpoints: latency-svc-kl9rs [470.678693ms] Jun 17 22:04:46.120: INFO: Created: latency-svc-hlsjj Jun 17 22:04:46.164: INFO: Got endpoints: latency-svc-k97hd [515.764283ms] Jun 17 22:04:46.170: INFO: Created: latency-svc-nz4mr Jun 17 22:04:46.215: INFO: Got endpoints: latency-svc-krjv9 [564.359732ms] Jun 17 22:04:46.221: INFO: Created: latency-svc-8hhpz Jun 17 22:04:46.264: INFO: Got endpoints: latency-svc-sfvtk [610.709957ms] Jun 17 22:04:46.270: INFO: Created: latency-svc-kzk66 Jun 17 22:04:46.314: INFO: Got endpoints: latency-svc-wxs75 [657.431106ms] Jun 17 22:04:46.319: INFO: Created: latency-svc-wrrxp Jun 17 22:04:46.365: INFO: Got endpoints: latency-svc-q5887 [705.801473ms] Jun 17 22:04:46.372: INFO: Created: latency-svc-fpksd Jun 17 22:04:46.414: INFO: Got endpoints: latency-svc-9kg6v [749.755429ms] Jun 17 22:04:46.419: INFO: Created: latency-svc-tmkcj Jun 17 22:04:46.465: INFO: Got endpoints: latency-svc-d9pr9 [750.157009ms] Jun 17 22:04:46.470: INFO: Created: latency-svc-sms5q Jun 17 22:04:46.515: INFO: Got endpoints: latency-svc-947tn [750.519624ms] Jun 17 22:04:46.521: INFO: Created: latency-svc-4ftx8 Jun 17 22:04:46.565: INFO: Got endpoints: latency-svc-qkj9l [749.743502ms] Jun 17 22:04:46.570: INFO: Created: latency-svc-rb9fx Jun 17 22:04:46.615: INFO: Got endpoints: latency-svc-f4dg7 [749.740727ms] Jun 17 22:04:46.621: INFO: Created: latency-svc-7n6bx Jun 17 22:04:46.665: INFO: Got endpoints: latency-svc-fc5b9 [750.604115ms] Jun 17 22:04:46.671: INFO: Created: latency-svc-98xcn Jun 17 22:04:46.716: INFO: Got endpoints: latency-svc-2flx6 [750.033803ms] Jun 17 22:04:46.723: INFO: Created: latency-svc-w2z4g Jun 17 22:04:46.765: INFO: Got endpoints: latency-svc-tnqvp [749.250244ms] Jun 17 22:04:46.770: INFO: Created: latency-svc-nccfp Jun 17 22:04:46.815: INFO: Got endpoints: latency-svc-7vx64 [750.132387ms] Jun 17 22:04:46.821: INFO: 
Created: latency-svc-lh57w Jun 17 22:04:46.865: INFO: Got endpoints: latency-svc-hlsjj [749.799281ms] Jun 17 22:04:46.869: INFO: Created: latency-svc-lfqzc Jun 17 22:04:46.915: INFO: Got endpoints: latency-svc-nz4mr [750.200837ms] Jun 17 22:04:46.922: INFO: Created: latency-svc-87qqd Jun 17 22:04:46.964: INFO: Got endpoints: latency-svc-8hhpz [749.208325ms] Jun 17 22:04:46.970: INFO: Created: latency-svc-z6fhc Jun 17 22:04:47.017: INFO: Got endpoints: latency-svc-kzk66 [752.240593ms] Jun 17 22:04:47.030: INFO: Created: latency-svc-69pvk Jun 17 22:04:47.065: INFO: Got endpoints: latency-svc-wrrxp [750.456376ms] Jun 17 22:04:47.070: INFO: Created: latency-svc-68gqt Jun 17 22:04:47.115: INFO: Got endpoints: latency-svc-fpksd [749.60728ms] Jun 17 22:04:47.121: INFO: Created: latency-svc-lwf5d Jun 17 22:04:47.165: INFO: Got endpoints: latency-svc-tmkcj [750.670981ms] Jun 17 22:04:47.170: INFO: Created: latency-svc-rtzn8 Jun 17 22:04:47.215: INFO: Got endpoints: latency-svc-sms5q [749.802479ms] Jun 17 22:04:47.221: INFO: Created: latency-svc-xclzt Jun 17 22:04:47.265: INFO: Got endpoints: latency-svc-4ftx8 [750.187983ms] Jun 17 22:04:47.272: INFO: Created: latency-svc-z2v69 Jun 17 22:04:47.315: INFO: Got endpoints: latency-svc-rb9fx [749.742ms] Jun 17 22:04:47.321: INFO: Created: latency-svc-6mm4s Jun 17 22:04:47.365: INFO: Got endpoints: latency-svc-7n6bx [749.617833ms] Jun 17 22:04:47.371: INFO: Created: latency-svc-jd6m2 Jun 17 22:04:47.415: INFO: Got endpoints: latency-svc-98xcn [749.681868ms] Jun 17 22:04:47.421: INFO: Created: latency-svc-2xqxz Jun 17 22:04:47.465: INFO: Got endpoints: latency-svc-w2z4g [749.675727ms] Jun 17 22:04:47.471: INFO: Created: latency-svc-67fxq Jun 17 22:04:47.515: INFO: Got endpoints: latency-svc-nccfp [749.697754ms] Jun 17 22:04:47.521: INFO: Created: latency-svc-nnk9d Jun 17 22:04:47.564: INFO: Got endpoints: latency-svc-lh57w [749.200767ms] Jun 17 22:04:47.570: INFO: Created: latency-svc-bqnh9 Jun 17 22:04:47.615: INFO: Got endpoints: latency-svc-lfqzc [749.978696ms] Jun 17 22:04:47.620: INFO: Created: latency-svc-w5pch Jun 17 22:04:47.666: INFO: Got endpoints: latency-svc-87qqd [751.544033ms] Jun 17 22:04:47.671: INFO: Created: latency-svc-8jngg Jun 17 22:04:47.715: INFO: Got endpoints: latency-svc-z6fhc [750.881501ms] Jun 17 22:04:47.721: INFO: Created: latency-svc-kcg6p Jun 17 22:04:47.764: INFO: Got endpoints: latency-svc-69pvk [747.678715ms] Jun 17 22:04:47.769: INFO: Created: latency-svc-gb74k Jun 17 22:04:47.816: INFO: Got endpoints: latency-svc-68gqt [751.33783ms] Jun 17 22:04:47.822: INFO: Created: latency-svc-7cbnr Jun 17 22:04:47.864: INFO: Got endpoints: latency-svc-lwf5d [748.944815ms] Jun 17 22:04:47.870: INFO: Created: latency-svc-824cv Jun 17 22:04:47.915: INFO: Got endpoints: latency-svc-rtzn8 [750.184657ms] Jun 17 22:04:47.920: INFO: Created: latency-svc-64s5j Jun 17 22:04:47.964: INFO: Got endpoints: latency-svc-xclzt [748.957351ms] Jun 17 22:04:47.969: INFO: Created: latency-svc-zlnpp Jun 17 22:04:48.015: INFO: Got endpoints: latency-svc-z2v69 [749.539213ms] Jun 17 22:04:48.020: INFO: Created: latency-svc-rrjrp Jun 17 22:04:48.065: INFO: Got endpoints: latency-svc-6mm4s [750.387364ms] Jun 17 22:04:48.070: INFO: Created: latency-svc-pc2pm Jun 17 22:04:48.114: INFO: Got endpoints: latency-svc-jd6m2 [749.366007ms] Jun 17 22:04:48.119: INFO: Created: latency-svc-msp7k Jun 17 22:04:48.165: INFO: Got endpoints: latency-svc-2xqxz [750.266929ms] Jun 17 22:04:48.171: INFO: Created: latency-svc-mnn7h Jun 17 22:04:48.215: INFO: Got endpoints: 
latency-svc-67fxq [749.477655ms] Jun 17 22:04:48.224: INFO: Created: latency-svc-nfv4m Jun 17 22:04:48.265: INFO: Got endpoints: latency-svc-nnk9d [749.841435ms] Jun 17 22:04:48.270: INFO: Created: latency-svc-22mgk Jun 17 22:04:48.315: INFO: Got endpoints: latency-svc-bqnh9 [750.108975ms] Jun 17 22:04:48.320: INFO: Created: latency-svc-sxl74 Jun 17 22:04:48.364: INFO: Got endpoints: latency-svc-w5pch [749.469491ms] Jun 17 22:04:48.369: INFO: Created: latency-svc-nv9sm Jun 17 22:04:48.415: INFO: Got endpoints: latency-svc-8jngg [748.579562ms] Jun 17 22:04:48.426: INFO: Created: latency-svc-z2nzz Jun 17 22:04:48.464: INFO: Got endpoints: latency-svc-kcg6p [748.743267ms] Jun 17 22:04:48.470: INFO: Created: latency-svc-49pzv Jun 17 22:04:48.513: INFO: Got endpoints: latency-svc-gb74k [748.984475ms] Jun 17 22:04:48.518: INFO: Created: latency-svc-n5l7v Jun 17 22:04:48.565: INFO: Got endpoints: latency-svc-7cbnr [748.920193ms] Jun 17 22:04:48.571: INFO: Created: latency-svc-8fqsv Jun 17 22:04:48.615: INFO: Got endpoints: latency-svc-824cv [750.527098ms] Jun 17 22:04:48.621: INFO: Created: latency-svc-5ksbf Jun 17 22:04:48.665: INFO: Got endpoints: latency-svc-64s5j [750.490094ms] Jun 17 22:04:48.671: INFO: Created: latency-svc-55r8n Jun 17 22:04:48.865: INFO: Got endpoints: latency-svc-zlnpp [901.194719ms] Jun 17 22:04:48.871: INFO: Created: latency-svc-vnm2l Jun 17 22:04:48.915: INFO: Got endpoints: latency-svc-rrjrp [900.221776ms] Jun 17 22:04:48.920: INFO: Created: latency-svc-57mmm Jun 17 22:04:48.965: INFO: Got endpoints: latency-svc-pc2pm [899.783639ms] Jun 17 22:04:48.971: INFO: Created: latency-svc-trf2t Jun 17 22:04:49.015: INFO: Got endpoints: latency-svc-msp7k [900.80349ms] Jun 17 22:04:49.024: INFO: Created: latency-svc-pkmlf Jun 17 22:04:49.065: INFO: Got endpoints: latency-svc-mnn7h [899.608035ms] Jun 17 22:04:49.071: INFO: Created: latency-svc-sj2cm Jun 17 22:04:49.115: INFO: Got endpoints: latency-svc-nfv4m [899.965147ms] Jun 17 22:04:49.120: INFO: Created: latency-svc-whpgr Jun 17 22:04:49.166: INFO: Got endpoints: latency-svc-22mgk [901.655228ms] Jun 17 22:04:49.173: INFO: Created: latency-svc-q9z82 Jun 17 22:04:49.215: INFO: Got endpoints: latency-svc-sxl74 [900.39754ms] Jun 17 22:04:49.220: INFO: Created: latency-svc-gcqd7 Jun 17 22:04:49.265: INFO: Got endpoints: latency-svc-nv9sm [900.641782ms] Jun 17 22:04:49.270: INFO: Created: latency-svc-fsvpb Jun 17 22:04:49.315: INFO: Got endpoints: latency-svc-z2nzz [899.773034ms] Jun 17 22:04:49.321: INFO: Created: latency-svc-tjxf8 Jun 17 22:04:49.365: INFO: Got endpoints: latency-svc-49pzv [900.751945ms] Jun 17 22:04:49.371: INFO: Created: latency-svc-hwt55 Jun 17 22:04:49.415: INFO: Got endpoints: latency-svc-n5l7v [901.626536ms] Jun 17 22:04:49.420: INFO: Created: latency-svc-4rv66 Jun 17 22:04:49.465: INFO: Got endpoints: latency-svc-8fqsv [900.081255ms] Jun 17 22:04:49.470: INFO: Created: latency-svc-5nwxx Jun 17 22:04:49.515: INFO: Got endpoints: latency-svc-5ksbf [899.848243ms] Jun 17 22:04:49.521: INFO: Created: latency-svc-lbjll Jun 17 22:04:49.565: INFO: Got endpoints: latency-svc-55r8n [899.206411ms] Jun 17 22:04:49.571: INFO: Created: latency-svc-zntlr Jun 17 22:04:49.616: INFO: Got endpoints: latency-svc-vnm2l [750.635831ms] Jun 17 22:04:49.623: INFO: Created: latency-svc-h5jcs Jun 17 22:04:49.665: INFO: Got endpoints: latency-svc-57mmm [750.005636ms] Jun 17 22:04:49.670: INFO: Created: latency-svc-qjw6s Jun 17 22:04:49.715: INFO: Got endpoints: latency-svc-trf2t [749.896399ms] Jun 17 22:04:49.720: INFO: Created: 
latency-svc-m2dhl Jun 17 22:04:49.765: INFO: Got endpoints: latency-svc-pkmlf [749.725466ms] Jun 17 22:04:49.774: INFO: Created: latency-svc-lgtcl Jun 17 22:04:49.815: INFO: Got endpoints: latency-svc-sj2cm [749.71842ms] Jun 17 22:04:49.820: INFO: Created: latency-svc-fb4rw Jun 17 22:04:49.865: INFO: Got endpoints: latency-svc-whpgr [749.7821ms] Jun 17 22:04:49.871: INFO: Created: latency-svc-7mlrg Jun 17 22:04:49.965: INFO: Got endpoints: latency-svc-q9z82 [798.89512ms] Jun 17 22:04:49.971: INFO: Created: latency-svc-9zjpl Jun 17 22:04:50.015: INFO: Got endpoints: latency-svc-gcqd7 [800.312225ms] Jun 17 22:04:50.021: INFO: Created: latency-svc-b4gpg Jun 17 22:04:50.065: INFO: Got endpoints: latency-svc-fsvpb [800.077379ms] Jun 17 22:04:50.071: INFO: Created: latency-svc-4nvc7 Jun 17 22:04:50.115: INFO: Got endpoints: latency-svc-tjxf8 [800.648098ms] Jun 17 22:04:50.122: INFO: Created: latency-svc-m9kj5 Jun 17 22:04:50.165: INFO: Got endpoints: latency-svc-hwt55 [799.522523ms] Jun 17 22:04:50.170: INFO: Created: latency-svc-94npv Jun 17 22:04:50.216: INFO: Got endpoints: latency-svc-4rv66 [800.975281ms] Jun 17 22:04:50.223: INFO: Created: latency-svc-5w64c Jun 17 22:04:50.265: INFO: Got endpoints: latency-svc-5nwxx [799.596192ms] Jun 17 22:04:50.271: INFO: Created: latency-svc-r2c8g Jun 17 22:04:50.315: INFO: Got endpoints: latency-svc-lbjll [800.322013ms] Jun 17 22:04:50.321: INFO: Created: latency-svc-7dc8h Jun 17 22:04:50.365: INFO: Got endpoints: latency-svc-zntlr [800.178711ms] Jun 17 22:04:50.370: INFO: Created: latency-svc-w72xs Jun 17 22:04:50.414: INFO: Got endpoints: latency-svc-h5jcs [798.43593ms] Jun 17 22:04:50.421: INFO: Created: latency-svc-c9qbf Jun 17 22:04:50.464: INFO: Got endpoints: latency-svc-qjw6s [799.363345ms] Jun 17 22:04:50.471: INFO: Created: latency-svc-bvv77 Jun 17 22:04:50.515: INFO: Got endpoints: latency-svc-m2dhl [799.886864ms] Jun 17 22:04:50.521: INFO: Created: latency-svc-xqhft Jun 17 22:04:50.565: INFO: Got endpoints: latency-svc-lgtcl [799.953574ms] Jun 17 22:04:50.571: INFO: Created: latency-svc-hlfks Jun 17 22:04:50.614: INFO: Got endpoints: latency-svc-fb4rw [799.814416ms] Jun 17 22:04:50.620: INFO: Created: latency-svc-rznw2 Jun 17 22:04:50.665: INFO: Got endpoints: latency-svc-7mlrg [800.375191ms] Jun 17 22:04:50.671: INFO: Created: latency-svc-dvnm5 Jun 17 22:04:50.716: INFO: Got endpoints: latency-svc-9zjpl [750.066949ms] Jun 17 22:04:50.721: INFO: Created: latency-svc-9zntl Jun 17 22:04:50.765: INFO: Got endpoints: latency-svc-b4gpg [749.776683ms] Jun 17 22:04:50.771: INFO: Created: latency-svc-kdl76 Jun 17 22:04:50.815: INFO: Got endpoints: latency-svc-4nvc7 [750.017246ms] Jun 17 22:04:50.821: INFO: Created: latency-svc-kv8x6 Jun 17 22:04:50.865: INFO: Got endpoints: latency-svc-m9kj5 [749.14125ms] Jun 17 22:04:50.870: INFO: Created: latency-svc-84kvt Jun 17 22:04:50.914: INFO: Got endpoints: latency-svc-94npv [749.879594ms] Jun 17 22:04:50.920: INFO: Created: latency-svc-p4zp9 Jun 17 22:04:50.965: INFO: Got endpoints: latency-svc-5w64c [748.740122ms] Jun 17 22:04:50.971: INFO: Created: latency-svc-k58bl Jun 17 22:04:51.015: INFO: Got endpoints: latency-svc-r2c8g [750.534235ms] Jun 17 22:04:51.022: INFO: Created: latency-svc-p6kwt Jun 17 22:04:51.064: INFO: Got endpoints: latency-svc-7dc8h [748.930342ms] Jun 17 22:04:51.069: INFO: Created: latency-svc-557f6 Jun 17 22:04:51.114: INFO: Got endpoints: latency-svc-w72xs [749.191235ms] Jun 17 22:04:51.119: INFO: Created: latency-svc-zp6gb Jun 17 22:04:51.165: INFO: Got endpoints: latency-svc-c9qbf 
[750.412001ms] Jun 17 22:04:51.170: INFO: Created: latency-svc-6rmc4 Jun 17 22:04:51.214: INFO: Got endpoints: latency-svc-bvv77 [749.909867ms] Jun 17 22:04:51.220: INFO: Created: latency-svc-5pjnd Jun 17 22:04:51.265: INFO: Got endpoints: latency-svc-xqhft [749.757526ms] Jun 17 22:04:51.271: INFO: Created: latency-svc-kpnsr Jun 17 22:04:51.315: INFO: Got endpoints: latency-svc-hlfks [750.427573ms] Jun 17 22:04:51.321: INFO: Created: latency-svc-5xct8 Jun 17 22:04:51.365: INFO: Got endpoints: latency-svc-rznw2 [750.411076ms] Jun 17 22:04:51.370: INFO: Created: latency-svc-465fj Jun 17 22:04:51.414: INFO: Got endpoints: latency-svc-dvnm5 [749.243668ms] Jun 17 22:04:51.420: INFO: Created: latency-svc-hhhp4 Jun 17 22:04:51.465: INFO: Got endpoints: latency-svc-9zntl [749.079918ms] Jun 17 22:04:51.496: INFO: Created: latency-svc-gqsph Jun 17 22:04:51.515: INFO: Got endpoints: latency-svc-kdl76 [749.770903ms] Jun 17 22:04:51.520: INFO: Created: latency-svc-vqlvh Jun 17 22:04:51.565: INFO: Got endpoints: latency-svc-kv8x6 [749.37224ms] Jun 17 22:04:51.571: INFO: Created: latency-svc-mfp4w Jun 17 22:04:51.614: INFO: Got endpoints: latency-svc-84kvt [749.373166ms] Jun 17 22:04:51.620: INFO: Created: latency-svc-qnc6r Jun 17 22:04:51.664: INFO: Got endpoints: latency-svc-p4zp9 [749.792056ms] Jun 17 22:04:51.671: INFO: Created: latency-svc-pzdp9 Jun 17 22:04:51.716: INFO: Got endpoints: latency-svc-k58bl [750.707941ms] Jun 17 22:04:51.721: INFO: Created: latency-svc-jj8ng Jun 17 22:04:51.764: INFO: Got endpoints: latency-svc-p6kwt [748.472218ms] Jun 17 22:04:51.769: INFO: Created: latency-svc-zcz59 Jun 17 22:04:51.814: INFO: Got endpoints: latency-svc-557f6 [750.41424ms] Jun 17 22:04:51.819: INFO: Created: latency-svc-85vnw Jun 17 22:04:51.864: INFO: Got endpoints: latency-svc-zp6gb [750.175156ms] Jun 17 22:04:51.870: INFO: Created: latency-svc-9bzb6 Jun 17 22:04:51.915: INFO: Got endpoints: latency-svc-6rmc4 [750.09525ms] Jun 17 22:04:51.922: INFO: Created: latency-svc-t5kh2 Jun 17 22:04:51.964: INFO: Got endpoints: latency-svc-5pjnd [749.341021ms] Jun 17 22:04:51.970: INFO: Created: latency-svc-8p678 Jun 17 22:04:52.015: INFO: Got endpoints: latency-svc-kpnsr [750.81537ms] Jun 17 22:04:52.023: INFO: Created: latency-svc-mqrfg Jun 17 22:04:52.066: INFO: Got endpoints: latency-svc-5xct8 [750.758476ms] Jun 17 22:04:52.072: INFO: Created: latency-svc-b4sc2 Jun 17 22:04:52.115: INFO: Got endpoints: latency-svc-465fj [749.710692ms] Jun 17 22:04:52.120: INFO: Created: latency-svc-cz94s Jun 17 22:04:52.165: INFO: Got endpoints: latency-svc-hhhp4 [750.49656ms] Jun 17 22:04:52.171: INFO: Created: latency-svc-46d5r Jun 17 22:04:52.215: INFO: Got endpoints: latency-svc-gqsph [750.366737ms] Jun 17 22:04:52.221: INFO: Created: latency-svc-wcfmq Jun 17 22:04:52.265: INFO: Got endpoints: latency-svc-vqlvh [750.194079ms] Jun 17 22:04:52.270: INFO: Created: latency-svc-vnpqm Jun 17 22:04:52.314: INFO: Got endpoints: latency-svc-mfp4w [749.647086ms] Jun 17 22:04:52.320: INFO: Created: latency-svc-npc6q Jun 17 22:04:52.365: INFO: Got endpoints: latency-svc-qnc6r [751.219599ms] Jun 17 22:04:52.371: INFO: Created: latency-svc-zqgbt Jun 17 22:04:52.415: INFO: Got endpoints: latency-svc-pzdp9 [750.878283ms] Jun 17 22:04:52.420: INFO: Created: latency-svc-fz75h Jun 17 22:04:52.466: INFO: Got endpoints: latency-svc-jj8ng [749.785537ms] Jun 17 22:04:52.471: INFO: Created: latency-svc-9dw6j Jun 17 22:04:52.515: INFO: Got endpoints: latency-svc-zcz59 [751.593922ms] Jun 17 22:04:52.521: INFO: Created: latency-svc-8mkpx Jun 17 
22:04:52.565: INFO: Got endpoints: latency-svc-85vnw [750.126301ms] Jun 17 22:04:52.570: INFO: Created: latency-svc-psp67 Jun 17 22:04:52.616: INFO: Got endpoints: latency-svc-9bzb6 [751.454189ms] Jun 17 22:04:52.622: INFO: Created: latency-svc-n6qmj Jun 17 22:04:52.665: INFO: Got endpoints: latency-svc-t5kh2 [750.222833ms] Jun 17 22:04:52.671: INFO: Created: latency-svc-bnjpx Jun 17 22:04:52.715: INFO: Got endpoints: latency-svc-8p678 [750.760464ms] Jun 17 22:04:52.721: INFO: Created: latency-svc-j9p9f Jun 17 22:04:52.764: INFO: Got endpoints: latency-svc-mqrfg [748.980211ms] Jun 17 22:04:52.769: INFO: Created: latency-svc-6m88w Jun 17 22:04:52.815: INFO: Got endpoints: latency-svc-b4sc2 [749.450793ms] Jun 17 22:04:52.822: INFO: Created: latency-svc-pjwj9 Jun 17 22:04:52.866: INFO: Got endpoints: latency-svc-cz94s [750.958443ms] Jun 17 22:04:52.872: INFO: Created: latency-svc-s8pt6 Jun 17 22:04:52.915: INFO: Got endpoints: latency-svc-46d5r [750.241218ms] Jun 17 22:04:52.922: INFO: Created: latency-svc-kvrgb Jun 17 22:04:53.015: INFO: Got endpoints: latency-svc-wcfmq [799.85969ms] Jun 17 22:04:53.021: INFO: Created: latency-svc-v72b5 Jun 17 22:04:53.065: INFO: Got endpoints: latency-svc-vnpqm [799.217031ms] Jun 17 22:04:53.071: INFO: Created: latency-svc-p849l Jun 17 22:04:53.115: INFO: Got endpoints: latency-svc-npc6q [800.510691ms] Jun 17 22:04:53.122: INFO: Created: latency-svc-772nj Jun 17 22:04:53.166: INFO: Got endpoints: latency-svc-zqgbt [800.30544ms] Jun 17 22:04:53.172: INFO: Created: latency-svc-zbhl9 Jun 17 22:04:53.215: INFO: Got endpoints: latency-svc-fz75h [799.922784ms] Jun 17 22:04:53.221: INFO: Created: latency-svc-vq26r Jun 17 22:04:53.265: INFO: Got endpoints: latency-svc-9dw6j [799.509563ms] Jun 17 22:04:53.271: INFO: Created: latency-svc-ljznx Jun 17 22:04:53.315: INFO: Got endpoints: latency-svc-8mkpx [799.213796ms] Jun 17 22:04:53.320: INFO: Created: latency-svc-2fkml Jun 17 22:04:53.364: INFO: Got endpoints: latency-svc-psp67 [799.350865ms] Jun 17 22:04:53.370: INFO: Created: latency-svc-ztvvf Jun 17 22:04:53.416: INFO: Got endpoints: latency-svc-n6qmj [799.939606ms] Jun 17 22:04:53.421: INFO: Created: latency-svc-rddrc Jun 17 22:04:53.464: INFO: Got endpoints: latency-svc-bnjpx [799.151988ms] Jun 17 22:04:53.470: INFO: Created: latency-svc-9slrn Jun 17 22:04:53.516: INFO: Got endpoints: latency-svc-j9p9f [800.868835ms] Jun 17 22:04:53.522: INFO: Created: latency-svc-l78vx Jun 17 22:04:53.566: INFO: Got endpoints: latency-svc-6m88w [801.091232ms] Jun 17 22:04:53.573: INFO: Created: latency-svc-z48ch Jun 17 22:04:53.615: INFO: Got endpoints: latency-svc-pjwj9 [799.882988ms] Jun 17 22:04:53.622: INFO: Created: latency-svc-rmlrt Jun 17 22:04:53.666: INFO: Got endpoints: latency-svc-s8pt6 [800.222933ms] Jun 17 22:04:53.714: INFO: Got endpoints: latency-svc-kvrgb [799.119036ms] Jun 17 22:04:53.765: INFO: Got endpoints: latency-svc-v72b5 [749.813562ms] Jun 17 22:04:53.816: INFO: Got endpoints: latency-svc-p849l [751.523812ms] Jun 17 22:04:53.865: INFO: Got endpoints: latency-svc-772nj [750.404156ms] Jun 17 22:04:53.915: INFO: Got endpoints: latency-svc-zbhl9 [748.972945ms] Jun 17 22:04:53.965: INFO: Got endpoints: latency-svc-vq26r [750.155197ms] Jun 17 22:04:54.015: INFO: Got endpoints: latency-svc-ljznx [750.167947ms] Jun 17 22:04:54.064: INFO: Got endpoints: latency-svc-2fkml [749.670921ms] Jun 17 22:04:54.165: INFO: Got endpoints: latency-svc-ztvvf [801.368347ms] Jun 17 22:04:54.215: INFO: Got endpoints: latency-svc-rddrc [799.299807ms] Jun 17 22:04:54.264: INFO: 
Got endpoints: latency-svc-9slrn [799.918308ms] Jun 17 22:04:54.325: INFO: Got endpoints: latency-svc-l78vx [809.482002ms] Jun 17 22:04:54.364: INFO: Got endpoints: latency-svc-z48ch [798.751099ms] Jun 17 22:04:54.416: INFO: Got endpoints: latency-svc-rmlrt [800.321745ms] Jun 17 22:04:54.416: INFO: Latencies: [8.327569ms 9.25174ms 10.743856ms 14.396116ms 16.983433ms 19.314522ms 21.879563ms 24.662267ms 27.394884ms 30.492392ms 33.452635ms 35.997424ms 39.736313ms 41.000344ms 43.27594ms 45.0601ms 45.939024ms 46.368502ms 46.518633ms 46.586008ms 46.606652ms 46.745963ms 46.765301ms 46.78184ms 46.784522ms 46.792279ms 46.797061ms 47.241704ms 47.291228ms 47.460496ms 48.074795ms 92.457402ms 139.898607ms 187.349288ms 235.209725ms 280.924632ms 329.7937ms 377.371146ms 423.633403ms 470.678693ms 515.764283ms 564.359732ms 610.709957ms 657.431106ms 705.801473ms 747.678715ms 748.472218ms 748.579562ms 748.740122ms 748.743267ms 748.920193ms 748.930342ms 748.944815ms 748.957351ms 748.972945ms 748.980211ms 748.984475ms 749.079918ms 749.14125ms 749.191235ms 749.200767ms 749.208325ms 749.243668ms 749.250244ms 749.341021ms 749.366007ms 749.37224ms 749.373166ms 749.450793ms 749.469491ms 749.477655ms 749.539213ms 749.60728ms 749.617833ms 749.647086ms 749.670921ms 749.675727ms 749.681868ms 749.697754ms 749.710692ms 749.71842ms 749.725466ms 749.740727ms 749.742ms 749.743502ms 749.755429ms 749.757526ms 749.770903ms 749.776683ms 749.7821ms 749.785537ms 749.792056ms 749.799281ms 749.802479ms 749.813562ms 749.841435ms 749.879594ms 749.896399ms 749.909867ms 749.978696ms 750.005636ms 750.017246ms 750.033803ms 750.066949ms 750.09525ms 750.108975ms 750.126301ms 750.132387ms 750.155197ms 750.157009ms 750.167947ms 750.175156ms 750.184657ms 750.187983ms 750.194079ms 750.200837ms 750.222833ms 750.241218ms 750.266929ms 750.366737ms 750.387364ms 750.404156ms 750.411076ms 750.412001ms 750.41424ms 750.427573ms 750.456376ms 750.490094ms 750.49656ms 750.519624ms 750.527098ms 750.534235ms 750.604115ms 750.635831ms 750.670981ms 750.707941ms 750.758476ms 750.760464ms 750.81537ms 750.878283ms 750.881501ms 750.958443ms 751.219599ms 751.33783ms 751.454189ms 751.523812ms 751.544033ms 751.593922ms 752.240593ms 798.43593ms 798.751099ms 798.89512ms 799.119036ms 799.151988ms 799.213796ms 799.217031ms 799.299807ms 799.350865ms 799.363345ms 799.509563ms 799.522523ms 799.596192ms 799.814416ms 799.85969ms 799.882988ms 799.886864ms 799.918308ms 799.922784ms 799.939606ms 799.953574ms 800.077379ms 800.178711ms 800.222933ms 800.30544ms 800.312225ms 800.321745ms 800.322013ms 800.375191ms 800.510691ms 800.648098ms 800.868835ms 800.975281ms 801.091232ms 801.368347ms 809.482002ms 899.206411ms 899.608035ms 899.773034ms 899.783639ms 899.848243ms 899.965147ms 900.081255ms 900.221776ms 900.39754ms 900.641782ms 900.751945ms 900.80349ms 901.194719ms 901.626536ms 901.655228ms] Jun 17 22:04:54.416: INFO: 50 %ile: 750.005636ms Jun 17 22:04:54.416: INFO: 90 %ile: 800.868835ms Jun 17 22:04:54.416: INFO: 99 %ile: 901.626536ms Jun 17 22:04:54.416: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:54.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3638" for this suite. 
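The percentile summary above is computed from 200 create-to-ready samples like the Created/Got-endpoints pairs in this run. As a rough standalone illustration of the quantity being measured (a hypothetical shell sketch, not the suite's Go harness; "latency-demo" is an assumed pre-existing Deployment with ready pods), one can time the gap between creating a Service and its Endpoints being populated:

# Hypothetical sketch: time from Service creation to populated Endpoints.
start=$(date +%s%N)
kubectl expose deployment latency-demo --name=latency-svc-demo --port=80
until [ -n "$(kubectl get endpoints latency-svc-demo -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.05    # poll on a short interval, as the test does
done
echo "endpoint latency: $(( ($(date +%s%N) - start) / 1000000 ))ms"
kubectl delete service latency-svc-demo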
• [SLOW TEST:16.070 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":13,"skipped":298,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:30.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6190 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 17 22:04:30.930: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 17 22:04:30.973: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:04:32.976: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:04:34.978: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:04:36.979: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:04:38.976: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:40.977: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:42.976: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:44.977: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:46.977: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:48.976: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:50.978: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 17 22:04:52.976: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 17 22:04:52.982: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 17 22:04:57.019: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jun 17 22:04:57.019: INFO: Going to poll 10.244.4.202 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jun 17 22:04:57.022: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.202 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6190 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:04:57.022: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:04:58.107: INFO: Found all 1 expected endpoints: [netserver-0] Jun 17 22:04:58.107: INFO: Going to poll 10.244.3.133 on port 8081 at least 0 times, with a maximum of 34 tries before 
failing Jun 17 22:04:58.110: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.133 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6190 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:04:58.110: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:04:59.192: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:04:59.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6190" for this suite. • [SLOW TEST:28.297 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":419,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:59.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:01.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8162" for this suite. 
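The node-pod UDP check in the pod-network test above reduces to a single exec'd probe. Run by hand it would look like the following (namespace, pod name, and IP are the ones from this run and would differ elsewhere):

# Send the literal string "hostName" as a UDP datagram from the host-network
# test pod; the netserver pod replies with its own hostname, so any
# non-empty output proves node-to-pod UDP reachability.
kubectl exec -n pod-network-test-6190 host-test-container-pod -c agnhost-container -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.4.202 8081 | grep -v '^\s*$'"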
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":30,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:41.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-2vrl STEP: Creating a pod to test atomic-volume-subpath Jun 17 22:04:41.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2vrl" in namespace "subpath-1460" to be "Succeeded or Failed" Jun 17 22:04:41.301: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299909ms Jun 17 22:04:43.303: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004906337s Jun 17 22:04:45.307: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009248099s Jun 17 22:04:47.311: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 6.013103704s Jun 17 22:04:49.315: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 8.0163124s Jun 17 22:04:51.319: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 10.020656229s Jun 17 22:04:53.323: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 12.024333345s Jun 17 22:04:55.328: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 14.030184468s Jun 17 22:04:57.331: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 16.033219225s Jun 17 22:04:59.335: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 18.036895738s Jun 17 22:05:01.341: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 20.042854321s Jun 17 22:05:03.347: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 22.048478414s Jun 17 22:05:05.352: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Running", Reason="", readiness=true. Elapsed: 24.053703139s Jun 17 22:05:07.355: INFO: Pod "pod-subpath-test-configmap-2vrl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.056954785s STEP: Saw pod success Jun 17 22:05:07.355: INFO: Pod "pod-subpath-test-configmap-2vrl" satisfied condition "Succeeded or Failed" Jun 17 22:05:07.357: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-2vrl container test-container-subpath-configmap-2vrl: STEP: delete the pod Jun 17 22:05:07.370: INFO: Waiting for pod pod-subpath-test-configmap-2vrl to disappear Jun 17 22:05:07.372: INFO: Pod pod-subpath-test-configmap-2vrl no longer exists STEP: Deleting pod pod-subpath-test-configmap-2vrl Jun 17 22:05:07.372: INFO: Deleting pod "pod-subpath-test-configmap-2vrl" in namespace "subpath-1460" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:07.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1460" for this suite. • [SLOW TEST:26.122 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":533,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:54.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 17 22:04:55.101: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Jun 17 22:04:57.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100295, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100295, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100295, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100295, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:05:00.134: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:00.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:08.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3595" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.870 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:43.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod Jun 17 22:04:48.305: INFO: EndpointSlices for endpointslice-3380/example-int-port Service have 0/1 endpoints STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Jun 17 22:05:08.338: INFO: EndpointSlice for Service endpointslice-3380/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:18.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3380" for this suite. 
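Both EndpointSlice tests above assert the same control-plane behavior: for a Service with a selector, matching Pods are mirrored into Endpoints and EndpointSlice objects that track the Service's lifecycle (and are recreated if deleted out from under it). A hand-run sketch of the same observation, with illustrative resource names:

# Create a selector-bearing Service and observe its mirrored resources.
kubectl create deployment es-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl expose deployment es-demo --port=80
kubectl get endpoints es-demo
kubectl get endpointslices -l kubernetes.io/service-name=es-demo
# Deleting the Service garbage-collects both kinds of object again:
kubectl delete service es-demo
kubectl get endpointslices -l kubernetes.io/service-name=es-demo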
• [SLOW TEST:35.117 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":41,"skipped":429,"failed":0} S ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":14,"skipped":301,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:08.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Jun 17 22:05:08.346: INFO: Waiting up to 5m0s for pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b" in namespace "var-expansion-2737" to be "Succeeded or Failed" Jun 17 22:05:08.348: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392873ms Jun 17 22:05:10.353: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007368524s Jun 17 22:05:12.357: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011061968s Jun 17 22:05:14.361: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014886539s Jun 17 22:05:16.364: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018630987s Jun 17 22:05:18.368: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022617081s STEP: Saw pod success Jun 17 22:05:18.369: INFO: Pod "var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b" satisfied condition "Succeeded or Failed" Jun 17 22:05:18.371: INFO: Trying to get logs from node node2 pod var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b container dapi-container: STEP: delete the pod Jun 17 22:05:18.383: INFO: Waiting for pod var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b to disappear Jun 17 22:05:18.384: INFO: Pod var-expansion-00134885-bec5-44eb-992a-7cfe00e2de2b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:18.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2737" for this suite. 
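The substitution exercised by the var-expansion test is the kubelet's $(VAR) expansion inside a container's command, which happens before the container's shell ever runs. A minimal hand-rolled pod in the same spirit (pod name, env var, and message are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    # Kubernetes rewrites $(MESSAGE) from the env var below before the
    # container starts; the shell never sees the $(...) syntax.
    command: ["sh", "-c", "echo test-value: $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "expanded-by-kubelet"
EOF
# Once the pod reaches Succeeded, its log shows the expanded value:
kubectl logs var-expansion-demo
kubectl delete pod var-expansion-demo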
• [SLOW TEST:10.078 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":301,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:01.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:21.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1923" for this suite. • [SLOW TEST:20.039 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:07.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-9819659c-37f9-4d8d-b3dd-030685a46d08 STEP: Creating a pod to test consume configMaps Jun 17 22:05:07.429: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4" in namespace "projected-7294" to be "Succeeded or Failed" Jun 17 22:05:07.433: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087413ms Jun 17 22:05:09.437: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007784352s Jun 17 22:05:11.441: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011560176s Jun 17 22:05:13.447: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017544598s Jun 17 22:05:15.453: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023221583s Jun 17 22:05:17.457: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027357712s Jun 17 22:05:19.462: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032241622s Jun 17 22:05:21.465: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.0358153s STEP: Saw pod success Jun 17 22:05:21.465: INFO: Pod "pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4" satisfied condition "Succeeded or Failed" Jun 17 22:05:21.471: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4 container projected-configmap-volume-test: STEP: delete the pod Jun 17 22:05:21.498: INFO: Waiting for pod pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4 to disappear Jun 17 22:05:21.500: INFO: Pod pod-projected-configmaps-b3439c99-b93a-43ca-92c5-b3b2222a34c4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:21.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7294" for this suite. • [SLOW TEST:14.112 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":539,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:18.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Jun 17 22:05:18.394: INFO: The status of Pod pod-update-d7375882-44a0-47e6-86ce-2fc396a761fe is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:05:20.398: INFO: The status of Pod pod-update-d7375882-44a0-47e6-86ce-2fc396a761fe is Pending, waiting for it to be Running (with Ready = true) Jun 17 
22:05:22.397: INFO: The status of Pod pod-update-d7375882-44a0-47e6-86ce-2fc396a761fe is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:05:24.399: INFO: The status of Pod pod-update-d7375882-44a0-47e6-86ce-2fc396a761fe is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 17 22:05:24.915: INFO: Successfully updated pod "pod-update-d7375882-44a0-47e6-86ce-2fc396a761fe" STEP: verifying the updated pod is in kubernetes Jun 17 22:05:24.920: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:24.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7583" for this suite. • [SLOW TEST:6.570 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":430,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:18.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 17 22:05:18.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-516 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Jun 17 22:05:18.616: INFO: stderr: "" Jun 17 22:05:18.616: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jun 17 22:05:18.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-516 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Jun 17 22:05:19.049: INFO: stderr: "" Jun 17 22:05:19.049: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Jun 17 22:05:19.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-516 delete pods e2e-test-httpd-pod' Jun 17 22:05:29.336: INFO: stderr: "" Jun 17 22:05:29.336: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:29.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-516" for this suite. 
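The dry-run test reduces to two kubectl invocations; for reference, the standalone flow (same image names as in the run above; namespace flags omitted for brevity):

kubectl run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod
# --dry-run=server sends the patch through full server-side validation and
# admission, but never persists it:
kubectl patch pod e2e-test-httpd-pod --dry-run=server \
  -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
# The live object is untouched, so the original image is still reported:
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
kubectl delete pod e2e-test-httpd-pod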
• [SLOW TEST:10.914 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":16,"skipped":319,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:01:26.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 in namespace container-probe-3180 Jun 17 22:01:30.782: INFO: Started pod test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 in namespace container-probe-3180 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 22:01:30.784: INFO: Initial restart count of pod test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:31.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3180" for this suite. 
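The probe test above runs a webserver whose liveness endpoint keeps answering and verifies restartCount stays 0 over roughly four minutes. A hand-rolled pod in the same spirit, assuming the httpd e2e image (used elsewhere in this run) serving / on port 80 as a stand-in for the test's /healthz endpoint:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    livenessProbe:
      httpGet:
        path: /          # stand-in for the test's /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
# While the probe keeps succeeding the kubelet never restarts the container;
# after a few minutes this should still print 0:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
kubectl delete pod liveness-demo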
• [SLOW TEST:244.926 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:57.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2464 STEP: creating service affinity-nodeport in namespace services-2464 STEP: creating replication controller affinity-nodeport in namespace services-2464 I0617 22:02:57.516136 37 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2464, replica count: 3 I0617 22:03:00.567546 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:03:03.568374 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:03:06.571089 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:03:09.572301 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:03:09.580: INFO: Creating new exec pod Jun 17 22:03:16.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Jun 17 22:03:16.861: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Jun 17 22:03:16.861: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:03:16.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.20.1 80' Jun 17 22:03:17.115: INFO: stderr: "+ nc -v -t -w 2 10.233.20.1 80\n+ echo hostName\nConnection to 10.233.20.1 80 port [tcp/http] succeeded!\n" Jun 17 22:03:17.115: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:03:17.115: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:17.354: INFO: rc: 1 Jun 17 22:03:17.354: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... [the identical command was retried roughly once per second from 22:03:18 through 22:03:48, each attempt failing the same way with "Connection refused" and rc: 1; those intermediate retries are elided] Jun 17 22:03:49.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:49.597: INFO: rc: 1 Jun 17 22:03:49.597: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying...
Jun 17 22:03:50.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:50.587: INFO: rc: 1 Jun 17 22:03:50.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:03:51.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:51.594: INFO: rc: 1 Jun 17 22:03:51.594: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:03:52.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:52.595: INFO: rc: 1 Jun 17 22:03:52.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:03:53.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:53.872: INFO: rc: 1 Jun 17 22:03:53.872: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:03:54.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:55.046: INFO: rc: 1 Jun 17 22:03:55.046: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:03:55.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:55.746: INFO: rc: 1 Jun 17 22:03:55.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:03:56.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:57.101: INFO: rc: 1 Jun 17 22:03:57.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:03:57.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:59.317: INFO: rc: 1 Jun 17 22:03:59.317: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:03:59.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:03:59.711: INFO: rc: 1 Jun 17 22:03:59.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:00.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:00.826: INFO: rc: 1 Jun 17 22:04:00.826: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:01.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:01.640: INFO: rc: 1 Jun 17 22:04:01.640: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:02.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:02.657: INFO: rc: 1 Jun 17 22:04:02.657: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:03.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:03.627: INFO: rc: 1 Jun 17 22:04:03.627: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:04.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:04.726: INFO: rc: 1 Jun 17 22:04:04.726: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:05.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:05.608: INFO: rc: 1 Jun 17 22:04:05.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:06.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:06.870: INFO: rc: 1 Jun 17 22:04:06.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:07.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:07.635: INFO: rc: 1 Jun 17 22:04:07.635: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:08.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:08.619: INFO: rc: 1 Jun 17 22:04:08.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:09.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:09.678: INFO: rc: 1 Jun 17 22:04:09.678: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:10.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:10.620: INFO: rc: 1 Jun 17 22:04:10.620: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:11.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:11.942: INFO: rc: 1 Jun 17 22:04:11.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:12.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:12.676: INFO: rc: 1 Jun 17 22:04:12.676: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:13.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:13.591: INFO: rc: 1 Jun 17 22:04:13.591: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:14.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:15.579: INFO: rc: 1 Jun 17 22:04:15.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:16.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:16.631: INFO: rc: 1 Jun 17 22:04:16.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:17.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:17.616: INFO: rc: 1 Jun 17 22:04:17.616: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:18.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:18.612: INFO: rc: 1 Jun 17 22:04:18.612: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:19.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:19.759: INFO: rc: 1 Jun 17 22:04:19.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:20.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:21.227: INFO: rc: 1 Jun 17 22:04:21.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:21.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:21.644: INFO: rc: 1 Jun 17 22:04:21.644: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:22.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:22.685: INFO: rc: 1 Jun 17 22:04:22.685: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:23.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:23.684: INFO: rc: 1 Jun 17 22:04:23.684: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:24.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:24.608: INFO: rc: 1 Jun 17 22:04:24.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:25.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:25.614: INFO: rc: 1 Jun 17 22:04:25.614: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:26.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:26.612: INFO: rc: 1 Jun 17 22:04:26.612: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:27.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:27.586: INFO: rc: 1 Jun 17 22:04:27.586: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:28.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:28.597: INFO: rc: 1 Jun 17 22:04:28.597: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:29.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:29.599: INFO: rc: 1 Jun 17 22:04:29.599: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:30.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:30.596: INFO: rc: 1 Jun 17 22:04:30.596: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:31.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:32.082: INFO: rc: 1 Jun 17 22:04:32.083: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:32.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:32.622: INFO: rc: 1 Jun 17 22:04:32.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:33.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:33.858: INFO: rc: 1 Jun 17 22:04:33.858: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:34.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:34.641: INFO: rc: 1 Jun 17 22:04:34.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:35.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:35.624: INFO: rc: 1 Jun 17 22:04:35.624: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:36.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:36.618: INFO: rc: 1 Jun 17 22:04:36.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:37.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:37.691: INFO: rc: 1 Jun 17 22:04:37.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:38.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:38.615: INFO: rc: 1 Jun 17 22:04:38.615: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:39.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:40.383: INFO: rc: 1 Jun 17 22:04:40.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:41.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:42.072: INFO: rc: 1 Jun 17 22:04:42.072: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:42.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:42.614: INFO: rc: 1 Jun 17 22:04:42.614: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:43.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:43.781: INFO: rc: 1 Jun 17 22:04:43.781: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:44.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:44.730: INFO: rc: 1 Jun 17 22:04:44.730: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:45.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:45.597: INFO: rc: 1 Jun 17 22:04:45.597: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:46.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:46.595: INFO: rc: 1 Jun 17 22:04:46.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:47.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:47.594: INFO: rc: 1 Jun 17 22:04:47.594: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:48.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:48.622: INFO: rc: 1 Jun 17 22:04:48.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:49.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:49.610: INFO: rc: 1 Jun 17 22:04:49.610: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:50.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:50.709: INFO: rc: 1 Jun 17 22:04:50.709: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:51.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:51.608: INFO: rc: 1 Jun 17 22:04:51.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:52.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:52.610: INFO: rc: 1 Jun 17 22:04:52.610: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:53.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:53.729: INFO: rc: 1 Jun 17 22:04:53.729: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:54.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:54.627: INFO: rc: 1 Jun 17 22:04:54.627: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:55.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:55.674: INFO: rc: 1 Jun 17 22:04:55.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:56.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:56.927: INFO: rc: 1 Jun 17 22:04:56.927: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:04:57.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:57.682: INFO: rc: 1 Jun 17 22:04:57.682: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:58.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:58.582: INFO: rc: 1 Jun 17 22:04:58.582: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:04:59.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:04:59.688: INFO: rc: 1 Jun 17 22:04:59.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... Jun 17 22:05:00.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884' Jun 17 22:05:00.636: INFO: rc: 1 Jun 17 22:05:00.636: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30884 nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused + echo hostName command terminated with exit code 1 error: exit status 1 Retrying... 
Jun 17 22:05:01.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:02.403: INFO: rc: 1
Jun 17 22:05:02.403: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:03.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:04.722: INFO: rc: 1
Jun 17 22:05:04.722: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:05.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:05.656: INFO: rc: 1
Jun 17 22:05:05.656: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:06.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:06.614: INFO: rc: 1
Jun 17 22:05:06.614: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:07.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:07.737: INFO: rc: 1
Jun 17 22:05:07.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:08.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:08.674: INFO: rc: 1
Jun 17 22:05:08.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:09.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:09.609: INFO: rc: 1
Jun 17 22:05:09.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:10.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:10.598: INFO: rc: 1
Jun 17 22:05:10.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:11.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:11.623: INFO: rc: 1
Jun 17 22:05:11.623: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:12.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:12.605: INFO: rc: 1
Jun 17 22:05:12.605: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:13.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:13.875: INFO: rc: 1
Jun 17 22:05:13.875: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:14.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:15.550: INFO: rc: 1
Jun 17 22:05:15.550: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
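The attempts repeat roughly once per second, and the FAIL entry below shows the overall budget: 2m0s for the endpoint to become reachable. As an illustration only (the framework's actual implementation lives in test/e2e/network/service.go, referenced in the stack trace below), the outer retry pattern amounts to:

// retry.go - illustrative retry-until-deadline loop mirroring the
// pattern in the log: probe the endpoint about once a second and
// give up after two minutes with the same error message.
package main

import (
	"fmt"
	"net"
	"time"
)

// reachable reports whether one TCP connect to endpoint succeeds
// within a 2-second timeout, like a single nc probe above.
func reachable(endpoint string) bool {
	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	endpoint := "10.10.190.207:30884"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if reachable(endpoint) {
			fmt.Println("service reachable")
			return
		}
		fmt.Println("Retrying...") // matches the suite's per-attempt log line
		time.Sleep(time.Second)
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
}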
Jun 17 22:05:16.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:16.629: INFO: rc: 1
Jun 17 22:05:16.629: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:17.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:17.626: INFO: rc: 1
Jun 17 22:05:17.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:17.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884'
Jun 17 22:05:17.889: INFO: rc: 1
Jun 17 22:05:17.889: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2464 exec execpod-affinity5gk77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30884:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30884
nc: connect to 10.10.190.207 port 30884 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error: exit status 1
Retrying...
Jun 17 22:05:17.890: FAIL: Unexpected error:
    <*errors.errorString | 0xc000339c10>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30884 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30884 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001caf080, 0x77b33d8, 0xc003640420, 0xc002c7a280, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00178a780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00178a780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00178a780, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Jun 17 22:05:17.891: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-2464, will wait for the garbage collector to delete the pods
Jun 17 22:05:17.968: INFO: Deleting ReplicationController affinity-nodeport took: 4.357212ms
Jun 17 22:05:18.069: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.043027ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2464".
STEP: Found 27 events.
Jun 17 22:05:29.287: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-qzwbb: { } Scheduled: Successfully assigned services-2464/affinity-nodeport-qzwbb to node2
Jun 17 22:05:29.287: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-rfvjh: { } Scheduled: Successfully assigned services-2464/affinity-nodeport-rfvjh to node2
Jun 17 22:05:29.287: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-rw4w9: { } Scheduled: Successfully assigned services-2464/affinity-nodeport-rw4w9 to node2
Jun 17 22:05:29.287: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity5gk77: { } Scheduled: Successfully assigned services-2464/execpod-affinity5gk77 to node1
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:02:57 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-rfvjh
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:02:57 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-qzwbb
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:02:57 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-rw4w9
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:00 +0000 UTC - event for affinity-nodeport-rfvjh: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:00 +0000 UTC - event for affinity-nodeport-rfvjh: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 467.020177ms
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:01 +0000 UTC - event for affinity-nodeport-qzwbb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:01 +0000 UTC - event for affinity-nodeport-rfvjh: {kubelet node2} Created: Created container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:01 +0000 UTC - event for affinity-nodeport-rw4w9: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:02 +0000 UTC - event for affinity-nodeport-qzwbb: {kubelet node2} Created: Created container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:02 +0000 UTC - event for affinity-nodeport-qzwbb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 358.047673ms
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:02 +0000 UTC - event for affinity-nodeport-rfvjh: {kubelet node2} Started: Started container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:02 +0000 UTC - event for affinity-nodeport-rw4w9: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 726.153784ms
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:03 +0000 UTC - event for affinity-nodeport-qzwbb: {kubelet node2} Started: Started container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:03 +0000 UTC - event for affinity-nodeport-rw4w9: {kubelet node2} Started: Started container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:03 +0000 UTC - event for affinity-nodeport-rw4w9: {kubelet node2} Created: Created container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:12 +0000 UTC - event for execpod-affinity5gk77: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:13 +0000 UTC - event for execpod-affinity5gk77: {kubelet node1} Created: Created container agnhost-container
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:13 +0000 UTC - event for execpod-affinity5gk77: {kubelet node1} Started: Started container agnhost-container
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:03:13 +0000 UTC - event for execpod-affinity5gk77: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 513.19607ms
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:05:17 +0000 UTC - event for affinity-nodeport-qzwbb: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:05:17 +0000 UTC - event for affinity-nodeport-rfvjh: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:05:17 +0000 UTC - event for affinity-nodeport-rw4w9: {kubelet node2} Killing: Stopping container affinity-nodeport
Jun 17 22:05:29.287: INFO: At 2022-06-17 22:05:17 +0000 UTC - event for execpod-affinity5gk77: {kubelet node1} Killing: Stopping container agnhost-container
Jun 17 22:05:29.289: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 17 22:05:29.289: INFO: 
Jun 17 22:05:29.294: INFO: Logging node info for node master1
Jun 17 22:05:29.296: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 43730 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:21 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:21 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:21 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:05:21 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:05:29.297: INFO: Logging kubelet events for node master1 Jun 17 22:05:29.299: INFO: Logging pods the kubelet thinks is on node master1 Jun 17 22:05:29.312: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.312: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:05:29.312: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:05:29.312: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:05:29.312: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:05:29.312: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.312: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:05:29.312: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.312: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:05:29.312: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.312: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:05:29.312: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:29.312: INFO: Container docker-registry ready: true, restart count 0 Jun 17 22:05:29.312: INFO: Container nginx ready: true, restart count 0 Jun 17 22:05:29.312: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:29.312: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:05:29.312: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:05:29.312: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.312: INFO: Container kube-scheduler ready: true, restart count 0 Jun 17 22:05:29.388: INFO: Latency metrics for node master1 Jun 17 22:05:29.388: INFO: Logging node info for node master2 Jun 17 22:05:29.391: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 43932 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null 
flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:28 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:28 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:28 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:05:28 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:05:29.391: INFO: Logging kubelet events for node master2 Jun 17 22:05:29.393: INFO: Logging pods the kubelet thinks is on node master2 Jun 17 22:05:29.408: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:05:29.408: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:05:29.408: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:05:29.408: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container coredns ready: true, restart count 1 Jun 17 22:05:29.408: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container autoscaler ready: true, restart count 1 Jun 17 22:05:29.408: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:05:29.408: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:05:29.408: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:05:29.408: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:05:29.408: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.408: INFO: Container nfd-controller ready: true, restart count 0 Jun 17 22:05:29.408: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:29.408: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:05:29.408: INFO: Container node-exporter 
ready: true, restart count 0 Jun 17 22:05:29.890: INFO: Latency metrics for node master2 Jun 17 22:05:29.890: INFO: Logging node info for node master3 Jun 17 22:05:29.892: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 43867 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:05:29.892: INFO: Logging kubelet events for node master3 Jun 17 22:05:29.895: INFO: Logging pods the kubelet thinks is on node master3 Jun 17 22:05:29.911: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:05:29.911: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.911: INFO: Container coredns ready: true, restart count 1 Jun 17 22:05:29.911: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:05:29.911: INFO: Container prometheus-operator ready: true, restart count 0 Jun 17 22:05:29.911: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:05:29.911: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:05:29.911: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:05:29.911: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:05:29.911: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:05:29.911: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:29.911: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:05:29.911: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:05:29.911: INFO: 
Init container install-cni ready: true, restart count 0 Jun 17 22:05:29.911: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:05:29.993: INFO: Latency metrics for node master3 Jun 17 22:05:29.993: INFO: Logging node info for node node1 Jun 17 22:05:29.995: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 43863 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:25 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:25 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:25 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:05:25 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:05:29.996: INFO: Logging kubelet events for node node1 Jun 17 22:05:29.998: INFO: Logging pods the kubelet thinks is on node node1 Jun 17 22:05:30.013: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 
container statuses recorded) Jun 17 22:05:30.013: INFO: Container collectd ready: true, restart count 0 Jun 17 22:05:30.013: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:05:30.013: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:05:30.013: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:05:30.013: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:05:30.013: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:05:30.013: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded) Jun 17 22:05:30.013: INFO: Container discover ready: false, restart count 0 Jun 17 22:05:30.013: INFO: Container init ready: false, restart count 0 Jun 17 22:05:30.013: INFO: Container install ready: false, restart count 0 Jun 17 22:05:30.013: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:30.013: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:05:30.013: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:05:30.013: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:05:30.013: INFO: busybox-1d7e38a9-ea32-4597-ac48-fc08f0d0407d started at 2022-06-17 22:02:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container busybox ready: true, restart count 0 Jun 17 22:05:30.013: INFO: ss2-1 started at 2022-06-17 22:04:38 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container webserver ready: true, restart count 0 Jun 17 22:05:30.013: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:05:30.013: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:30.013: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:05:30.013: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:05:30.013: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:05:30.013: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:05:30.013: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.013: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:05:30.014: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.014: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:05:30.014: INFO: ss2-2 started at 2022-06-17 22:05:20 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.014: INFO: Container webserver ready: false, restart count 0 Jun 17 
22:05:30.014: INFO: sample-webhook-deployment-78988fc6cd-wgxgq started at 2022-06-17 22:05:25 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.014: INFO: Container sample-webhook ready: false, restart count 0 Jun 17 22:05:30.014: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:30.014: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:05:30.014: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded) Jun 17 22:05:30.014: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:05:30.014: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:05:30.014: INFO: Container grafana ready: true, restart count 0 Jun 17 22:05:30.014: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:05:31.602: INFO: Latency metrics for node node1 Jun 17 22:05:31.602: INFO: Logging node info for node node2 Jun 17 22:05:31.605: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 43924 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw 
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:05:26 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:05:31.606: INFO: Logging kubelet events for node node2 Jun 17 22:05:31.609: INFO: Logging pods the kubelet thinks is on node node2 Jun 17 22:05:31.621: INFO: pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178 started at 2022-06-17 22:05:29 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container agnhost-container ready: false, restart count 0 Jun 17 22:05:31.621: INFO: liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 started at 2022-06-17 22:05:21 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:05:31.621: INFO: pod-update-d7375882-44a0-47e6-86ce-2fc396a761fe started at 2022-06-17 22:05:18 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container nginx ready: true, restart count 0 Jun 17 22:05:31.621: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:05:31.621: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:31.621: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:05:31.621: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:05:31.621: INFO: 
test-webserver-c67c950f-e38b-4445-ab3b-ceabf4cf4f10 started at 2022-06-17 22:01:26 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container test-webserver ready: true, restart count 0 Jun 17 22:05:31.621: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:05:31.621: INFO: ss2-0 started at 2022-06-17 22:05:12 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container webserver ready: true, restart count 0 Jun 17 22:05:31.621: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:05:31.621: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:05:31.621: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 17 22:05:31.621: INFO: Container discover ready: false, restart count 0 Jun 17 22:05:31.621: INFO: Container init ready: false, restart count 0 Jun 17 22:05:31.621: INFO: Container install ready: false, restart count 0 Jun 17 22:05:31.621: INFO: affinity-clusterip-timeout-nlh7m started at 2022-06-17 22:04:49 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.621: INFO: Container affinity-clusterip-timeout ready: false, restart count 0 Jun 17 22:05:31.622: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.622: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:05:31.622: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.622: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:05:31.622: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.622: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:05:31.622: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded) Jun 17 22:05:31.622: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:05:31.622: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:05:31.622: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:05:31.622: INFO: Container collectd ready: true, restart count 0 Jun 17 22:05:31.622: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:05:31.622: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:05:31.622: INFO: affinity-clusterip-timeout-bjr2l started at 2022-06-17 22:04:49 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.622: INFO: Container affinity-clusterip-timeout ready: false, restart count 0 Jun 17 22:05:31.622: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded) Jun 17 22:05:31.622: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:05:31.818: INFO: Latency metrics for node node2 Jun 17 22:05:31.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2464" for this suite. 
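For reference, the per-node diagnostics dumped above (node object, kubelet events, and the pods scheduled on each node) can be gathered by hand with standard kubectl queries. A minimal sketch using the kubeconfig and node names from this run (substitute node2 as needed):

    kubectl --kubeconfig=/root/.kube/config get node node1 -o yaml
    kubectl --kubeconfig=/root/.kube/config get events --field-selector involvedObject.kind=Node,involvedObject.name=node1
    kubectl --kubeconfig=/root/.kube/config get pods --all-namespaces --field-selector spec.nodeName=node1 -o wide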
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [154.342 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:17.890: Unexpected error: <*errors.errorString | 0xc000339c10>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30884 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30884 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":172,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:29.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-ea8be685-8f6d-4802-84f2-68a7ed9902e1 STEP: Creating a pod to test consume configMaps Jun 17 22:05:29.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178" in namespace "projected-7792" to be "Succeeded or Failed" Jun 17 22:05:29.420: INFO: Pod "pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.804832ms Jun 17 22:05:31.424: INFO: Pod "pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007447632s Jun 17 22:05:33.428: INFO: Pod "pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011426096s Jun 17 22:05:35.433: INFO: Pod "pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015955927s STEP: Saw pod success Jun 17 22:05:35.433: INFO: Pod "pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178" satisfied condition "Succeeded or Failed" Jun 17 22:05:35.435: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178 container agnhost-container: STEP: delete the pod Jun 17 22:05:35.449: INFO: Waiting for pod pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178 to disappear Jun 17 22:05:35.451: INFO: Pod pod-projected-configmaps-0b01b6d1-fa47-48fd-a46b-4c22224c0178 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7792" for this suite. • [SLOW TEST:6.079 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":340,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:31.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:31.881: INFO: Waiting up to 5m0s for pod "busybox-user-65534-626a2e7b-9dfe-44b7-a9b1-36fef078b52f" in namespace "security-context-test-2167" to be "Succeeded or Failed" Jun 17 22:05:31.883: INFO: Pod "busybox-user-65534-626a2e7b-9dfe-44b7-a9b1-36fef078b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523678ms Jun 17 22:05:33.886: INFO: Pod "busybox-user-65534-626a2e7b-9dfe-44b7-a9b1-36fef078b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005491701s Jun 17 22:05:35.891: INFO: Pod "busybox-user-65534-626a2e7b-9dfe-44b7-a9b1-36fef078b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010145145s Jun 17 22:05:37.894: INFO: Pod "busybox-user-65534-626a2e7b-9dfe-44b7-a9b1-36fef078b52f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013108208s Jun 17 22:05:37.894: INFO: Pod "busybox-user-65534-626a2e7b-9dfe-44b7-a9b1-36fef078b52f" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:37.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2167" for this suite. • [SLOW TEST:6.056 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:46.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4384 Jun 17 22:04:46.843: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:04:48.846: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jun 17 22:04:48.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jun 17 22:04:49.129: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jun 17 22:04:49.129: INFO: stdout: "iptables" Jun 17 22:04:49.130: INFO: proxyMode: iptables Jun 17 22:04:49.137: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jun 17 22:04:49.139: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-4384 STEP: creating replication controller affinity-clusterip-timeout in namespace services-4384 I0617 22:04:49.147629 32 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4384, replica count: 3 I0617 22:04:52.199077 32 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:04:55.200233 32 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:04:55.204: INFO: Creating new exec pod Jun 17 22:05:00.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
affinity-clusterip-timeout 80' Jun 17 22:05:00.469: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Jun 17 22:05:00.469: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:05:00.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.1.93 80' Jun 17 22:05:00.764: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.1.93 80\nConnection to 10.233.1.93 80 port [tcp/http] succeeded!\n" Jun 17 22:05:00.764: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:05:00.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.1.93:80/ ; done' Jun 17 22:05:01.117: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n" Jun 17 22:05:01.117: INFO: stdout: "\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l\naffinity-clusterip-timeout-bjr2l" Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: 
affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Received response from host: affinity-clusterip-timeout-bjr2l Jun 17 22:05:01.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.1.93:80/' Jun 17 22:05:01.417: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n" Jun 17 22:05:01.417: INFO: stdout: "affinity-clusterip-timeout-bjr2l" Jun 17 22:05:21.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.1.93:80/' Jun 17 22:05:22.144: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.1.93:80/\n" Jun 17 22:05:22.144: INFO: stdout: "affinity-clusterip-timeout-nlh7m" Jun 17 22:05:22.144: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4384, will wait for the garbage collector to delete the pods Jun 17 22:05:22.210: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 4.44192ms Jun 17 22:05:22.310: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.186349ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:38.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4384" for this suite. 
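The affinity probe above can be replayed by hand from the exec pod: repeated requests to the service's ClusterIP should keep returning the same backend pod until the session-affinity timeout elapses, after which a different backend may answer (as seen above, where affinity-clusterip-timeout-bjr2l gave way to affinity-clusterip-timeout-nlh7m after ~20s). A sketch using the service IP, namespace, and pod names from this run; all of these change between runs:

    kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- \
      /bin/sh -c 'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://10.233.1.93:80/; echo; done'
    # wait longer than the service's sessionAffinityConfig.clientIP.timeoutSeconds, then probe again
    sleep 20
    kubectl --kubeconfig=/root/.kube/config --namespace=services-4384 exec execpod-affinityzljs7 -- \
      curl -q -s --connect-timeout 2 http://10.233.1.93:80/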
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:51.816 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":282,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":178,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:37.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 17 22:05:37.938: INFO: Waiting up to 5m0s for pod "pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677" in namespace "emptydir-3627" to be "Succeeded or Failed" Jun 17 22:05:37.939: INFO: Pod "pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677": Phase="Pending", Reason="", readiness=false. Elapsed: 1.865713ms Jun 17 22:05:39.944: INFO: Pod "pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006068252s Jun 17 22:05:41.949: INFO: Pod "pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011047904s STEP: Saw pod success Jun 17 22:05:41.949: INFO: Pod "pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677" satisfied condition "Succeeded or Failed" Jun 17 22:05:41.952: INFO: Trying to get logs from node node2 pod pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677 container test-container: STEP: delete the pod Jun 17 22:05:41.963: INFO: Waiting for pod pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677 to disappear Jun 17 22:05:41.966: INFO: Pod pod-a7b9c8b1-dbce-43c8-96a6-2983bbe4a677 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:41.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3627" for this suite. 
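The emptydir permission checks in this group amount to mounting an emptyDir volume and inspecting the resulting file modes inside the container. A rough equivalent of the (root,0777,default) case — not the framework's exact pod spec — using an image already present on these nodes:

    kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-check
    spec:
      restartPolicy: Never
      containers:
      - name: check
        image: busybox:1.28
        command: ["sh", "-c", "ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}
    EOF
    # once the pod has run to completion, the default medium shows up world-writable (drwxrwxrwx):
    kubectl --kubeconfig=/root/.kube/config logs emptydir-mode-check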
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":178,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:24.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:05:25.227: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:05:27.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:05:29.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:05:31.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:05:33.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100325, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:05:36.249: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:36.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:44.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5774" for this suite. STEP: Destroying namespace "webhook-5774-markers" for this suite. 
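While the framework polls the DeploymentStatus object directly (the repeated dumps above), the same readiness wait can be expressed with stock kubectl against the names used in this run, while the test namespace still exists:

    kubectl --kubeconfig=/root/.kube/config -n webhook-5774 rollout status deployment/sample-webhook-deployment --timeout=5m
    kubectl --kubeconfig=/root/.kube/config -n webhook-5774 get endpoints e2e-test-webhook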
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.436 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":43,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:35.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4221.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4221.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4221.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4221.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:05:41.550: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.553: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.556: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.559: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.566: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.568: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.570: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4221.svc.cluster.local from pod 
dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.573: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4221.svc.cluster.local from pod dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc: the server could not find the requested resource (get pods dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc) Jun 17 22:05:41.578: INFO: Lookups using dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4221.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4221.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4221.svc.cluster.local jessie_udp@dns-test-service-2.dns-4221.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4221.svc.cluster.local] Jun 17 22:05:46.607: INFO: DNS probes using dns-4221/dns-test-ef4739a6-93db-4882-a744-9a7bc35a90cc succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:46.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4221" for this suite. • [SLOW TEST:11.142 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":18,"skipped":353,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:46.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:05:46.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0" in namespace "projected-6016" to be "Succeeded or Failed" Jun 17 22:05:46.682: INFO: Pod "downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369895ms Jun 17 22:05:48.691: INFO: Pod "downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012889413s Jun 17 22:05:50.695: INFO: Pod "downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016725341s STEP: Saw pod success Jun 17 22:05:50.695: INFO: Pod "downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0" satisfied condition "Succeeded or Failed" Jun 17 22:05:50.697: INFO: Trying to get logs from node node2 pod downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0 container client-container: STEP: delete the pod Jun 17 22:05:50.748: INFO: Waiting for pod downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0 to disappear Jun 17 22:05:50.750: INFO: Pod downwardapi-volume-459c2722-3e18-426e-924b-9faf959136b0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:50.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6016" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":354,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:41.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:42.019: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 17 22:05:50.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-535 --namespace=crd-publish-openapi-535 create -f -' Jun 17 22:05:50.662: INFO: stderr: "" Jun 17 22:05:50.662: INFO: stdout: "e2e-test-crd-publish-openapi-6005-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 17 22:05:50.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-535 --namespace=crd-publish-openapi-535 delete e2e-test-crd-publish-openapi-6005-crds test-cr' Jun 17 22:05:50.849: INFO: stderr: "" Jun 17 22:05:50.849: INFO: stdout: "e2e-test-crd-publish-openapi-6005-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 17 22:05:50.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-535 --namespace=crd-publish-openapi-535 apply -f -' Jun 17 22:05:51.254: INFO: stderr: "" Jun 17 22:05:51.254: INFO: stdout: "e2e-test-crd-publish-openapi-6005-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 17 22:05:51.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-535 --namespace=crd-publish-openapi-535 delete e2e-test-crd-publish-openapi-6005-crds test-cr' Jun 17 22:05:51.431: INFO: 
stderr: "" Jun 17 22:05:51.431: INFO: stdout: "e2e-test-crd-publish-openapi-6005-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 17 22:05:51.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-535 explain e2e-test-crd-publish-openapi-6005-crds' Jun 17 22:05:51.786: INFO: stderr: "" Jun 17 22:05:51.786: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6005-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-535" for this suite. 
• [SLOW TEST:13.418 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":14,"skipped":188,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:44.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:05:44.749: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:05:46.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:05:48.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100344, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:05:51.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:51.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8728-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:05:59.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3644" for this suite. STEP: Destroying namespace "webhook-3644-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.401 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":44,"skipped":488,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:59.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 17 22:06:03.941: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:03.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8072" for this suite. 
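[Editor's aside — the container-runtime test above expects the termination message "OK" to come from the message file, since with FallbackToLogsOnError the container log is consulted only when the container fails AND the file is empty; here the container succeeds. A minimal pod of that shape (pod name and image are assumptions):]

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # assumed image
    # Exit 0 after writing the message file, so the kubelet reads "OK" from it.
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log        # the default path, stated explicitly
    terminationMessagePolicy: FallbackToLogsOnError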
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:55.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 17 22:05:55.508: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:08.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-585" for this suite. • [SLOW TEST:12.920 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":218,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:50.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8854.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8854.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8854.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8854.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for 
each expected name from probers Jun 17 22:05:56.819: INFO: DNS probes using dns-test-1cb0613f-f20d-4c69-9e39-fc60657f94fe succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8854.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8854.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8854.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8854.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:06:02.859: INFO: DNS probes using dns-test-8e2005d6-d21b-4cfe-9040-688248c2dad6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8854.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8854.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8854.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8854.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:06:08.902: INFO: DNS probes using dns-test-5f3264c8-8581-4b76-901a-2baedf232c0f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:08.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8854" for this suite. 
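[Editor's aside — the service probed above starts as type ExternalName, is then repointed to bar.example.com, and is finally converted to a ClusterIP service, with the dig loops re-run after every change (CNAME lookups first, an A lookup at the end). Its initial form would look like the sketch below; the first externalName target is an assumption, since the log only shows the change to bar.example.com:]

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3      # from the log
  namespace: dns-8854           # from the log
spec:
  type: ExternalName
  externalName: foo.example.com  # assumed initial target; later patched to bar.example.com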
• [SLOW TEST:18.154 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":20,"skipped":359,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:08.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-bc2dba4c-9668-413c-b37d-4d60c7d63378 STEP: Creating a pod to test consume secrets Jun 17 22:06:08.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8" in namespace "projected-3648" to be "Succeeded or Failed" Jun 17 22:06:08.970: INFO: Pod "pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977695ms Jun 17 22:06:10.974: INFO: Pod "pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007014942s Jun 17 22:06:12.979: INFO: Pod "pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011232426s STEP: Saw pod success Jun 17 22:06:12.979: INFO: Pod "pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8" satisfied condition "Succeeded or Failed" Jun 17 22:06:12.981: INFO: Trying to get logs from node node2 pod pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8 container projected-secret-volume-test: STEP: delete the pod Jun 17 22:06:12.999: INFO: Waiting for pod pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8 to disappear Jun 17 22:06:13.001: INFO: Pod pod-projected-secrets-88a71e3c-1c63-4015-a212-c24a3401e6a8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:13.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3648" for this suite. 
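[Editor's aside — the projected-secret pod above consumes the secret through a projected volume with an explicit defaultMode while running as a non-root user with an fsGroup, so the test can check both ownership and permissions of the mounted files. A sketch with assumed UID/GID, mode, and command; the secret name is from the log:]

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # assumed non-root UID
    fsGroup: 1001                    # assumed GID applied to volume files
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "ls -ln /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440              # assumed mode exercised together with fsGroup
      sources:
      - secret:
          name: projected-secret-test-bc2dba4c-9668-413c-b37d-4d60c7d63378  # from the log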
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":359,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:08.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Jun 17 22:06:08.437: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 17 22:06:13.440: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2669" for this suite. • [SLOW TEST:5.056 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":16,"skipped":221,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:13.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Jun 17 22:06:19.116: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4101 PodName:pod-sharedvolume-0c1dadf6-a898-4111-83ae-495bd3c59779 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:19.116: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:19.199: INFO: Exec stderr: "" 
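[Editor's aside — the shared-volume check above execs `cat /usr/share/volumeshare/shareddata.txt` inside busybox-main-container to read a file that a sibling container wrote into a common emptyDir. Roughly, with container names following the log and the writer's image and command as assumptions:]

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo        # hypothetical name
spec:
  containers:
  - name: nginx-container            # writer (name from the test's step text)
    image: busybox                   # assumed image
    # Write into the shared volume, then stay alive for the duration of the test.
    command: ["/bin/sh", "-c", "echo 'Hello from the writer' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container     # reader; the framework execs `cat` here
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  volumes:
  - name: volumeshare
    emptyDir: {}                     # same backing directory for both containers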
[AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:19.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4101" for this suite. • [SLOW TEST:6.138 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":22,"skipped":389,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:19.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Jun 17 22:06:19.474: INFO: The status of Pod pod-hostip-8595c84e-e351-48a9-a13f-42acdc3da141 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:21.478: INFO: The status of Pod pod-hostip-8595c84e-e351-48a9-a13f-42acdc3da141 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:23.478: INFO: The status of Pod pod-hostip-8595c84e-e351-48a9-a13f-42acdc3da141 is Running (Ready = true) Jun 17 22:06:23.483: INFO: Pod pod-hostip-8595c84e-e351-48a9-a13f-42acdc3da141 has hostIP: 10.10.190.208 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:23.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-626" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":522,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:13.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:24.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-706" for this suite. • [SLOW TEST:11.067 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":17,"skipped":238,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:04.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-2lw4 STEP: Creating a pod to test atomic-volume-subpath Jun 17 22:06:04.062: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2lw4" in namespace "subpath-3351" to be "Succeeded or Failed" Jun 17 22:06:04.066: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094553ms Jun 17 22:06:06.070: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007678066s Jun 17 22:06:08.074: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 4.012175895s Jun 17 22:06:10.081: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 6.019035776s Jun 17 22:06:12.085: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 8.023220934s Jun 17 22:06:14.088: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 10.025638005s Jun 17 22:06:16.093: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.0306268s Jun 17 22:06:18.098: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 14.035614322s Jun 17 22:06:20.102: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 16.040022934s Jun 17 22:06:22.107: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 18.045049523s Jun 17 22:06:24.111: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048592547s Jun 17 22:06:26.114: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 22.051522915s Jun 17 22:06:28.119: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Running", Reason="", readiness=true. Elapsed: 24.05678133s Jun 17 22:06:30.121: INFO: Pod "pod-subpath-test-configmap-2lw4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.059329927s STEP: Saw pod success Jun 17 22:06:30.122: INFO: Pod "pod-subpath-test-configmap-2lw4" satisfied condition "Succeeded or Failed" Jun 17 22:06:30.124: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-2lw4 container test-container-subpath-configmap-2lw4: STEP: delete the pod Jun 17 22:06:30.136: INFO: Waiting for pod pod-subpath-test-configmap-2lw4 to disappear Jun 17 22:06:30.138: INFO: Pod pod-subpath-test-configmap-2lw4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2lw4 Jun 17 22:06:30.138: INFO: Deleting pod "pod-subpath-test-configmap-2lw4" in namespace "subpath-3351" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:30.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3351" for this suite. 
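[Editor's aside — the atomic-writer pod above mounts a single ConfigMap key via subPath over a file path that already exists in the image, then the test container reads it repeatedly for the ~26 seconds it stays Running before exiting successfully. A sketch with assumed paths, key, and data:]

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap-demo       # hypothetical name
data:
  configmap-key: configmap-contents  # assumed key and value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox                   # assumed image
    # Poll the overlaid file for a while, then exit 0 so the pod Succeeds.
    command: ["/bin/sh", "-c", "for i in $(seq 1 20); do cat /etc/resolv.conf; sleep 1; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/resolv.conf    # assumed pre-existing file, overlaid by the subPath mount
      subPath: configmap-key
  volumes:
  - name: config
    configMap:
      name: subpath-configmap-demo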
• [SLOW TEST:26.138 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":46,"skipped":520,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:24.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:06:25.062: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:06:27.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100385, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100385, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100385, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100385, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:06:30.083: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:31.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3621" for this suite. STEP: Destroying namespace "webhook-3621-markers" for this suite. 
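[Editor's aside — as with the validating webhook earlier, the mutating registration above is performed in Go via the AdmissionRegistration API. An equivalent hand-written configuration would be roughly the following; the service coordinates are from this run's log, while the configuration name, webhook name, and path are assumptions:]

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap-webhook     # hypothetical name
webhooks:
- name: add-data-to-configmap.example.com   # hypothetical, fully-qualified
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-3621        # from the log
      name: e2e-test-webhook         # from the log
      path: /mutating-configmaps     # assumed handler path
      port: 443
    caBundle: Cg==                   # placeholder; the suite injects its generated CA
  admissionReviewVersions: ["v1"]
  sideEffects: None
  # The handler returns a JSONPatch in its AdmissionReview response, which is
  # how the created configmap comes back "updated by the webhook".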
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.541 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":18,"skipped":259,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:26.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-1d7e38a9-ea32-4597-ac48-fc08f0d0407d in namespace container-probe-6234 Jun 17 22:02:30.661: INFO: Started pod busybox-1d7e38a9-ea32-4597-ac48-fc08f0d0407d in namespace container-probe-6234 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 22:02:30.664: INFO: Initial restart count of pod busybox-1d7e38a9-ea32-4597-ac48-fc08f0d0407d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:31.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6234" for this suite. 
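[Editor's aside — the probe pod above keeps /tmp/health in place for its entire life, so the exec liveness probe always succeeds and restartCount stays 0 for the roughly four minutes the test watches it. A minimal equivalent; image, command, and probe timings are assumptions:]

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo        # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                   # assumed image
    # The health file is created once and never removed, so `cat` keeps passing.
    command: ["/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5         # assumed timings
      periodSeconds: 5
      failureThreshold: 1            # a single failure would trigger a restart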
• [SLOW TEST:244.559 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":286,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:04:32.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6383 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Jun 17 22:04:32.720: INFO: Found 0 stateful pods, waiting for 3 Jun 17 22:04:42.724: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:04:42.724: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:04:42.724: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 17 22:04:52.723: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:04:52.723: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:04:52.723: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jun 17 22:04:52.748: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 17 22:05:02.777: INFO: Updating stateful set ss2 Jun 17 22:05:02.782: INFO: Waiting for Pod statefulset-6383/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Jun 17 22:05:12.803: INFO: Found 1 stateful pods, waiting for 3 Jun 17 22:05:22.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:05:22.808: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:05:22.808: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 17 22:05:32.807: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:05:32.807: INFO: Waiting for pod ss2-1 to 
enter Running - Ready=true, currently Running - Ready=true Jun 17 22:05:32.807: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 17 22:05:42.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:05:42.808: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:05:42.808: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 17 22:05:42.829: INFO: Updating stateful set ss2 Jun 17 22:05:42.834: INFO: Waiting for Pod statefulset-6383/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:05:52.858: INFO: Updating stateful set ss2 Jun 17 22:05:52.863: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update Jun 17 22:05:52.863: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 17 22:06:02.868: INFO: Deleting all statefulset in ns statefulset-6383 Jun 17 22:06:02.870: INFO: Scaling statefulset ss2 to 0 Jun 17 22:06:32.885: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:06:32.887: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:32.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6383" for this suite. • [SLOW TEST:120.220 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":34,"skipped":640,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:30.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:06:30.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1" in namespace "projected-7194" to be "Succeeded or 
Failed" Jun 17 22:06:30.200: INFO: Pod "downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025993ms Jun 17 22:06:32.206: INFO: Pod "downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008078271s Jun 17 22:06:34.210: INFO: Pod "downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012523212s STEP: Saw pod success Jun 17 22:06:34.211: INFO: Pod "downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1" satisfied condition "Succeeded or Failed" Jun 17 22:06:34.213: INFO: Trying to get logs from node node1 pod downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1 container client-container: STEP: delete the pod Jun 17 22:06:34.267: INFO: Waiting for pod downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1 to disappear Jun 17 22:06:34.270: INFO: Pod downwardapi-volume-2316260e-c573-4f35-b3b3-308c3fac21d1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:34.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7194" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":527,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:31.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-ecf88e8c-645f-4943-a8c4-a6d2c5950df5 STEP: Creating a pod to test consume secrets Jun 17 22:06:31.248: INFO: Waiting up to 5m0s for pod "pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e" in namespace "secrets-6360" to be "Succeeded or Failed" Jun 17 22:06:31.251: INFO: Pod "pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231233ms Jun 17 22:06:33.254: INFO: Pod "pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005580656s Jun 17 22:06:35.259: INFO: Pod "pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010574114s STEP: Saw pod success Jun 17 22:06:35.259: INFO: Pod "pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e" satisfied condition "Succeeded or Failed" Jun 17 22:06:35.262: INFO: Trying to get logs from node node2 pod pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e container secret-volume-test: STEP: delete the pod Jun 17 22:06:35.272: INFO: Waiting for pod pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e to disappear Jun 17 22:06:35.274: INFO: Pod pod-secrets-d57933bd-ab9c-43a6-8af9-e53b2c56b30e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:35.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6360" for this suite. STEP: Destroying namespace "secret-namespace-8177" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":290,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 17 22:06:35.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8911 b50e32de-4021-4d13-af0c-3e7835a5fc8b 45500 0 2022-06-17 22:06:35 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-17 22:06:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:06:35.344: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8911 b50e32de-4021-4d13-af0c-3e7835a5fc8b 45501 0 2022-06-17 22:06:35 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-17 22:06:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:35.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8911" for this suite. 
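The watch above is opened at the resourceVersion returned by the first update, so only the later events are delivered (the second MODIFIED at 45500 and the DELETED at 45501). A minimal sketch of the same pattern against the raw API, assuming a hypothetical starting version of 45499; a version older than the server's watch cache window would instead fail with 410 Gone:

kubectl proxy --port=8001 &
# stream configmap events beginning at the given resourceVersion
curl "http://127.0.0.1:8001/api/v1/namespaces/watch-8911/configmaps?watch=1&resourceVersion=45499"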
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":21,"skipped":296,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:35.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:35.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-341" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":22,"skipped":306,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:23.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Jun 17 22:06:23.574: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jun 17 22:06:23.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 create -f -' Jun 17 22:06:23.983: INFO: stderr: "" Jun 17 22:06:23.983: INFO: stdout: "service/agnhost-replica created\n" Jun 17 22:06:23.983: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jun 17 22:06:23.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 create -f -' Jun 17 22:06:24.327: INFO: stderr: "" Jun 17 22:06:24.327: INFO: stdout: "service/agnhost-primary created\n" Jun 17 22:06:24.327: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 17 22:06:24.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 create -f -' Jun 17 22:06:24.684: INFO: stderr: "" Jun 17 22:06:24.684: INFO: stdout: "service/frontend created\n" Jun 17 22:06:24.684: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jun 17 22:06:24.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 create -f -' Jun 17 22:06:25.034: INFO: stderr: "" Jun 17 22:06:25.034: INFO: stdout: "deployment.apps/frontend created\n" Jun 17 22:06:25.034: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 17 22:06:25.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 create -f -' Jun 17 22:06:25.367: INFO: stderr: "" Jun 17 22:06:25.367: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jun 17 22:06:25.367: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 17 22:06:25.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 create -f -' Jun 17 22:06:25.736: INFO: stderr: "" Jun 17 22:06:25.736: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jun 17 22:06:25.736: INFO: Waiting for all frontend pods to be Running. Jun 17 22:06:35.789: INFO: Waiting for frontend to serve content. Jun 17 22:06:35.797: INFO: Trying to add a new entry to the guestbook. Jun 17 22:06:35.805: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 17 22:06:35.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f -' Jun 17 22:06:35.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:06:35.966: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jun 17 22:06:35.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f -' Jun 17 22:06:36.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:06:36.117: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jun 17 22:06:36.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f -' Jun 17 22:06:36.258: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:06:36.258: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 17 22:06:36.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f -' Jun 17 22:06:36.401: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:06:36.401: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 17 22:06:36.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f -' Jun 17 22:06:36.533: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:06:36.533: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jun 17 22:06:36.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f -' Jun 17 22:06:36.681: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 17 22:06:36.681: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:36.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6955" for this suite. 
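Each cleanup step above pipes the same manifest back into kubectl with --grace-period=0 --force, which removes the API object immediately instead of waiting for graceful termination (hence the repeated warning that containers may keep running). A minimal equivalent, assuming the six guestbook manifests were saved to a hypothetical guestbook.yaml:

kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6955 delete --grace-period=0 --force -f guestbook.yaml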
• [SLOW TEST:13.150 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":24,"skipped":548,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:32.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Jun 17 22:06:32.970: INFO: Waiting up to 5m0s for pod "client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048" in namespace "containers-8846" to be "Succeeded or Failed" Jun 17 22:06:32.971: INFO: Pod "client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048": Phase="Pending", Reason="", readiness=false. Elapsed: 1.900325ms Jun 17 22:06:34.975: INFO: Pod "client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005230606s Jun 17 22:06:36.980: INFO: Pod "client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01008822s STEP: Saw pod success Jun 17 22:06:36.980: INFO: Pod "client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048" satisfied condition "Succeeded or Failed" Jun 17 22:06:36.982: INFO: Trying to get logs from node node2 pod client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048 container agnhost-container: STEP: delete the pod Jun 17 22:06:37.004: INFO: Waiting for pod client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048 to disappear Jun 17 22:06:37.006: INFO: Pod client-containers-1e1bc05e-b0a3-40da-afd4-34ee6f0a7048 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:37.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8846" for this suite. 
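What this test exercises: in a Pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch of the pod shape used above (hypothetical pod name; agnhost's entrypoint-tester subcommand reports the argv it was started with):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-entrypoint
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    # command overrides the image ENTRYPOINT; remaining items become argv
    command: ["/agnhost", "entrypoint-tester", "override", "arguments"]
EOF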
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:38.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:05:38.677: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:39.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2586" for this suite. • [SLOW TEST:61.292 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":22,"skipped":297,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:34.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Jun 17 22:06:34.332: INFO: Waiting up to 5m0s for pod "downward-api-f5cee999-f96b-4954-8619-449be13ef866" in namespace "downward-api-5894" to be "Succeeded or Failed" Jun 17 22:06:34.334: INFO: Pod "downward-api-f5cee999-f96b-4954-8619-449be13ef866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057755ms Jun 17 22:06:36.338: INFO: Pod "downward-api-f5cee999-f96b-4954-8619-449be13ef866": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005887799s Jun 17 22:06:38.341: INFO: Pod "downward-api-f5cee999-f96b-4954-8619-449be13ef866": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009095229s Jun 17 22:06:40.345: INFO: Pod "downward-api-f5cee999-f96b-4954-8619-449be13ef866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012893747s STEP: Saw pod success Jun 17 22:06:40.345: INFO: Pod "downward-api-f5cee999-f96b-4954-8619-449be13ef866" satisfied condition "Succeeded or Failed" Jun 17 22:06:40.347: INFO: Trying to get logs from node node1 pod downward-api-f5cee999-f96b-4954-8619-449be13ef866 container dapi-container: STEP: delete the pod Jun 17 22:06:40.360: INFO: Waiting for pod downward-api-f5cee999-f96b-4954-8619-449be13ef866 to disappear Jun 17 22:06:40.362: INFO: Pod downward-api-f5cee999-f96b-4954-8619-449be13ef866 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:40.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5894" for this suite. • [SLOW TEST:6.071 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":537,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:37.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-8cc40517-c94a-43b4-8d58-0157574f1174 STEP: Creating a pod to test consume configMaps Jun 17 22:06:37.111: INFO: Waiting up to 5m0s for pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e" in namespace "configmap-6306" to be "Succeeded or Failed" Jun 17 22:06:37.113: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280837ms Jun 17 22:06:39.116: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005239592s Jun 17 22:06:41.119: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008298363s Jun 17 22:06:43.122: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011348289s Jun 17 22:06:45.126: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.014943s Jun 17 22:06:47.129: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.018235633s STEP: Saw pod success Jun 17 22:06:47.129: INFO: Pod "pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e" satisfied condition "Succeeded or Failed" Jun 17 22:06:47.131: INFO: Trying to get logs from node node2 pod pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e container agnhost-container: STEP: delete the pod Jun 17 22:06:47.269: INFO: Waiting for pod pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e to disappear Jun 17 22:06:47.270: INFO: Pod pod-configmaps-e112440d-65eb-4448-bd53-0b18d7b7141e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:47.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6306" for this suite. • [SLOW TEST:10.203 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":683,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:35.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-4713/secret-test-6766d187-4fcd-499b-8a9b-72fc3c988744 STEP: Creating a pod to test consume secrets Jun 17 22:06:35.467: INFO: Waiting up to 5m0s for pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae" in namespace "secrets-4713" to be "Succeeded or Failed" Jun 17 22:06:35.470: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354498ms Jun 17 22:06:37.473: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005972868s Jun 17 22:06:39.478: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010534677s Jun 17 22:06:41.482: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014697061s Jun 17 22:06:43.487: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019676647s Jun 17 22:06:45.491: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023567334s Jun 17 22:06:47.496: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.028050802s STEP: Saw pod success Jun 17 22:06:47.496: INFO: Pod "pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae" satisfied condition "Succeeded or Failed" Jun 17 22:06:47.498: INFO: Trying to get logs from node node2 pod pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae container env-test: STEP: delete the pod Jun 17 22:06:47.510: INFO: Waiting for pod pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae to disappear Jun 17 22:06:47.511: INFO: Pod pod-configmaps-e604025f-0fb1-4c86-94bd-c11d0024eeae no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:47.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4713" for this suite. • [SLOW TEST:12.084 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:31.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:06:31.205: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jun 17 22:06:39.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 create -f -' Jun 17 22:06:40.422: INFO: stderr: "" Jun 17 22:06:40.422: INFO: stdout: "e2e-test-crd-publish-openapi-5351-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 17 22:06:40.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 delete e2e-test-crd-publish-openapi-5351-crds test-foo' Jun 17 22:06:40.600: INFO: stderr: "" Jun 17 22:06:40.600: INFO: stdout: "e2e-test-crd-publish-openapi-5351-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jun 17 22:06:40.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 apply -f -' Jun 17 22:06:40.979: INFO: stderr: "" Jun 17 22:06:40.979: INFO: stdout: "e2e-test-crd-publish-openapi-5351-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jun 17 22:06:40.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 delete e2e-test-crd-publish-openapi-5351-crds test-foo' Jun 17 22:06:41.136: INFO: stderr: "" Jun 17 22:06:41.136: INFO: stdout: "e2e-test-crd-publish-openapi-5351-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with 
unknown properties when disallowed by the schema Jun 17 22:06:41.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 create -f -' Jun 17 22:06:41.505: INFO: rc: 1 Jun 17 22:06:41.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 apply -f -' Jun 17 22:06:41.842: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jun 17 22:06:41.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 create -f -' Jun 17 22:06:42.169: INFO: rc: 1 Jun 17 22:06:42.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 --namespace=crd-publish-openapi-2909 apply -f -' Jun 17 22:06:42.493: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jun 17 22:06:42.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 explain e2e-test-crd-publish-openapi-5351-crds' Jun 17 22:06:42.845: INFO: stderr: "" Jun 17 22:06:42.845: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5351-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jun 17 22:06:42.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 explain e2e-test-crd-publish-openapi-5351-crds.metadata' Jun 17 22:06:43.279: INFO: stderr: "" Jun 17 22:06:43.279: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5351-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to.
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed.
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object.
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jun 17 22:06:43.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 explain e2e-test-crd-publish-openapi-5351-crds.spec' Jun 17 22:06:43.649: INFO: stderr: "" Jun 17 22:06:43.649: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5351-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jun 17 22:06:43.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 explain e2e-test-crd-publish-openapi-5351-crds.spec.bars' Jun 17 22:06:44.036: INFO: stderr: "" Jun 17 22:06:44.036: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5351-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jun 17 22:06:44.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2909 explain e2e-test-crd-publish-openapi-5351-crds.spec.bars2' Jun 17 22:06:44.393: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:48.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2909" for this suite.
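The validation and explain behavior above is driven by the CRD's structural OpenAPI v3 schema: required properties make create/apply fail client-side when missing, and unknown fields are rejected when the schema disallows them. A minimal sketch of such a CRD, with a hypothetical group and kind that mirror the Foo fixture's fields:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name: {description: Name of Bar., type: string}
                    age: {description: Age of Bar., type: string}
                    bazs:
                      description: List of Bazs.
                      type: array
                      items: {type: string}
          status:
            description: Status of Foo
            type: object
EOF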
• [SLOW TEST:16.875 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":19,"skipped":277,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:40.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2202.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2202.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 17 22:06:52.451: INFO: DNS probes using dns-2202/dns-test-42fe7bf0-59a7-4d66-bbaa-abd425c41fbb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:52.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2202" for this suite. 
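The probe pods above loop dig over both UDP and TCP and write an OK marker for each name that resolves. The same lookup can be reproduced by hand from a throwaway pod (the stock busybox nslookup pattern; resolution goes through the cluster DNS server listed in the pod's /etc/resolv.conf):

kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local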
• [SLOW TEST:12.083 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":49,"skipped":543,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:52.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:52.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-1234" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":310,"failed":0} [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:47.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 17 22:06:47.553: INFO: Waiting up to 5m0s for pod "security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f" in namespace "security-context-1897" to be "Succeeded or Failed" Jun 17 22:06:47.556: INFO: Pod "security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323059ms Jun 17 22:06:49.559: INFO: Pod "security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006024156s Jun 17 22:06:51.564: INFO: Pod "security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010418185s Jun 17 22:06:53.568: INFO: Pod "security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014312623s STEP: Saw pod success Jun 17 22:06:53.568: INFO: Pod "security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f" satisfied condition "Succeeded or Failed" Jun 17 22:06:53.570: INFO: Trying to get logs from node node2 pod security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f container test-container: STEP: delete the pod Jun 17 22:06:53.581: INFO: Waiting for pod security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f to disappear Jun 17 22:06:53.583: INFO: Pod security-context-da8e50eb-1621-441c-8883-5fd1f23a9e6f no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:53.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1897" for this suite. • [SLOW TEST:6.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:48.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Jun 17 22:06:50.125: INFO: running pods: 0 < 3 Jun 17 22:06:52.129: INFO: running pods: 0 < 3 Jun 17 22:06:54.129: INFO: running pods: 2 < 3 Jun 17 22:06:56.130: INFO: running pods: 2 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:58.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3" for this suite. 
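The test above creates a PodDisruptionBudget and waits for the disruption controller to fill in its status (currentHealthy, desiredHealthy, disruptionsAllowed, expectedPods) once the selected pods are running. A minimal sketch of a PDB of that shape, with a hypothetical name and selector:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sample-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      foo: bar
EOF
# the controller-populated status is what the test observes
kubectl get pdb sample-pdb -o jsonpath='{.status.disruptionsAllowed}'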
• [SLOW TEST:10.079 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":20,"skipped":278,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:47.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Jun 17 22:06:47.327: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:49.331: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:51.331: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:53.331: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Jun 17 22:06:53.347: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:55.351: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:57.351: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 17 22:06:57.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.353: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.460: INFO: Exec stderr: "" Jun 17 22:06:57.460: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.460: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.554: INFO: Exec stderr: "" Jun 17 22:06:57.554: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.554: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.636: INFO: Exec stderr: "" Jun 17 22:06:57.636: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-pod
ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.636: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.724: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 17 22:06:57.724: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.724: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.806: INFO: Exec stderr: "" Jun 17 22:06:57.806: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.806: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.894: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 17 22:06:57.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.895: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:57.985: INFO: Exec stderr: "" Jun 17 22:06:57.985: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:57.985: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:58.067: INFO: Exec stderr: "" Jun 17 22:06:58.067: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:58.067: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:58.148: INFO: Exec stderr: "" Jun 17 22:06:58.148: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6795 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:06:58.148: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:06:58.252: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:06:58.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6795" for this suite.
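For the hostNetwork=false pod the kubelet manages /etc/hosts itself (the file it writes begins with a "# Kubernetes-managed hosts file." header), except in a container that mounts its own file at that path, while the hostNetwork=true pod sees the node's file untouched. Each verification above reduces to an exec of cat, e.g.:

kubectl --namespace=e2e-kubelet-etc-hosts-6795 exec test-pod -c busybox-1 -- cat /etc/hosts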
• [SLOW TEST:10.971 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:53.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:06:53.705: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Jun 17 22:06:53.719: INFO: The status of Pod pod-exec-websocket-91ce5548-7826-487a-9832-32ffe5c5b426 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:55.723: INFO: The status of Pod pod-exec-websocket-91ce5548-7826-487a-9832-32ffe5c5b426 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:57.723: INFO: The status of Pod pod-exec-websocket-91ce5548-7826-487a-9832-32ffe5c5b426 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:06:59.723: INFO: The status of Pod pod-exec-websocket-91ce5548-7826-487a-9832-32ffe5c5b426 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:00.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3542" for this suite. 
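Reference sketch for the spec above: it drives the pod exec subresource over a raw websocket rather than through kubectl. A hedged analogue of the same round trip (pod name illustrative; kubectl exec reaches the identical subresource, negotiating SPDY instead of a websocket):

kubectl run exec-demo --image=k8s.gcr.io/e2e-test-images/busybox:1.29-1 \
  --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/exec-demo --timeout=2m
kubectl exec exec-demo -- echo remote-exec-ok
# the websocket form of the same call targets the same endpoint, e.g.:
#   GET /api/v1/namespaces/default/pods/exec-demo/exec?command=echo&command=remote-exec-ok&stdout=true
kubectl delete pod exec-demo --now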
• [SLOW TEST:6.494 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:58.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:06:58.204: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 17 22:07:03.209: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 17 22:07:03.209: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 22:07:03.223: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5774 fd8d7c25-e90c-4acd-ba0c-19b7c61dff74 46411 1 2022-06-17 22:07:03 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-06-17 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007e57d38 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 17 22:07:03.226: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-5774 24572bbe-7be1-4a98-b7e2-f45f41306237 46416 1 2022-06-17 22:07:03 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment fd8d7c25-e90c-4acd-ba0c-19b7c61dff74 0xc007eb0207 0xc007eb0208}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd8d7c25-e90c-4acd-ba0c-19b7c61dff74\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007eb0298 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:03.226: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 17 22:07:03.226: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5774 
e86092db-2cda-40a8-a7be-6b5dacd670e9 46413 1 2022-06-17 22:06:58 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment fd8d7c25-e90c-4acd-ba0c-19b7c61dff74 0xc007eb00f7 0xc007eb00f8}] [] [{e2e.test Update apps/v1 2022-06-17 22:06:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"fd8d7c25-e90c-4acd-ba0c-19b7c61dff74\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc007eb0198 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:03.229: INFO: Pod "test-cleanup-controller-npqhb" is available: &Pod{ObjectMeta:{test-cleanup-controller-npqhb test-cleanup-controller- deployment-5774 90e4f035-97d2-414f-873d-761628a0130f 46396 0 2022-06-17 22:06:58 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.184" ], "mac": "ca:85:5d:60:bc:a3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.184" ], "mac": "ca:85:5d:60:bc:a3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller e86092db-2cda-40a8-a7be-6b5dacd670e9 0xc007eb06b7 0xc007eb06b8}] [] [{kube-controller-manager Update v1 2022-06-17 22:06:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e86092db-2cda-40a8-a7be-6b5dacd670e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rz9rn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rz9rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:fa
lse,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:06:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.184,StartTime:2022-06-17 22:06:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e683ae2e96d9823a117f930435204788dba499106b6d56320a306ac107b44699,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:03.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5774" for this suite. 
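Reference sketch for the spec above: the deployment dump shows RevisionHistoryLimit:*0, which is what makes the controller prune superseded ReplicaSets as soon as a rollout completes. A hedged analogue with invented names (the images are reused from the suite's registry; the container name "httpd" assumes kubectl's usual derivation from the image basename):

kubectl create deployment cleanup-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
kubectl set image deployment/cleanup-demo httpd=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl rollout status deployment/cleanup-demo
# with revisionHistoryLimit=0, only the current ReplicaSet should remain:
kubectl get rs -l app=cleanup-demo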
• [SLOW TEST:5.062 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":21,"skipped":295,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:58.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 17 22:06:58.345: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:05.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8806" for this suite. 
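Reference sketch for the spec above: on a restartPolicy=Never pod, a failing init container is not retried, the pod goes Failed, and the app containers never start. A minimal manifest exhibiting the same outcome (all names illustrative; busybox tag assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app-never-starts
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expect: Failed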
• [SLOW TEST:7.301 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":38,"skipped":720,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:05.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:05.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5264" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":39,"skipped":726,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:03.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:07:03.307: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:09.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1887" for this suite. 
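Reference sketch for the spec above: the conformance check only needs a CustomResourceDefinition to round-trip through create and delete. A comparable standalone definition (group and names invented for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl delete crd widgets.demo.example.com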
• [SLOW TEST:6.047 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":22,"skipped":319,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:05.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:07:05.708: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Jun 17 22:07:05.722: INFO: The status of Pod pod-logs-websocket-a01c13d9-12bc-4684-97bc-1cd5ca3fb07e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:07.725: INFO: The status of Pod pod-logs-websocket-a01c13d9-12bc-4684-97bc-1cd5ca3fb07e is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:09.726: INFO: The status of Pod pod-logs-websocket-a01c13d9-12bc-4684-97bc-1cd5ca3fb07e is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:09.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1564" for this suite. 
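Reference sketch for the spec above: it reads container logs through the websocket-capable log subresource. A hedged analogue (names illustrative; kubectl logs uses the same endpoint over plain HTTP):

kubectl run log-demo --image=k8s.gcr.io/e2e-test-images/busybox:1.29-1 \
  --restart=Never -- sh -c 'echo hello-from-pod; sleep 3600'
kubectl wait --for=condition=Ready pod/log-demo --timeout=2m
kubectl logs log-demo
# the streaming/websocket form of the same call:
#   GET /api/v1/namespaces/default/pods/log-demo/log?follow=true
kubectl delete pod log-demo --now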
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":733,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:36.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Jun 17 22:07:16.778: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 17 22:07:16.847: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 17 22:07:16.847: INFO: Deleting pod "simpletest.rc-5hv8t" in namespace "gc-3051" Jun 17 22:07:16.853: INFO: Deleting pod "simpletest.rc-6p5kn" in namespace "gc-3051" Jun 17 22:07:16.858: INFO: Deleting pod "simpletest.rc-8xgsw" in namespace "gc-3051" Jun 17 22:07:16.863: INFO: Deleting pod "simpletest.rc-dpxbh" in namespace "gc-3051" Jun 17 22:07:16.869: INFO: Deleting pod "simpletest.rc-f2vpj" in namespace "gc-3051" Jun 17 22:07:16.874: INFO: Deleting pod "simpletest.rc-gp8hz" in namespace "gc-3051" Jun 17 22:07:16.880: INFO: Deleting pod "simpletest.rc-hng5j" in namespace "gc-3051" Jun 17 22:07:16.885: INFO: Deleting pod "simpletest.rc-njx7x" in namespace "gc-3051" Jun 17 22:07:16.890: INFO: Deleting pod "simpletest.rc-v87kl" in namespace "gc-3051" Jun 17 22:07:16.896: INFO: Deleting pod "simpletest.rc-vbdkn" in namespace "gc-3051" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:16.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3051" for this suite. 
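Reference sketch for the spec above: the garbage collector test deletes a replication controller with orphan semantics, waits 30 seconds to prove the pods survive, then removes the ten simpletest.rc-* pods by hand, which is why the pod deletions appear after the metrics dump. The equivalent orphaning delete from the CLI (controller name and label invented; --cascade=orphan assumes kubectl v1.20 or newer, since the default cascade is background):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-demo
spec:
  replicas: 2
  selector: {app: simpletest-demo}
  template:
    metadata:
      labels: {app: simpletest-demo}
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
kubectl delete rc simpletest-demo --cascade=orphan
kubectl get pods -l app=simpletest-demo   # pods remain and must be deleted manually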
• [SLOW TEST:40.207 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":25,"skipped":552,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:00.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8377 STEP: creating service affinity-clusterip-transition in namespace services-8377 STEP: creating replication controller affinity-clusterip-transition in namespace services-8377 I0617 22:07:00.308848 40 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8377, replica count: 3 I0617 22:07:03.359451 40 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0617 22:07:06.360135 40 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 17 22:07:06.365: INFO: Creating new exec pod Jun 17 22:07:11.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8377 exec execpod-affinityfvjml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Jun 17 22:07:11.986: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-transition 80\n+ echo hostName\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jun 17 22:07:11.986: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:07:11.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8377 exec execpod-affinityfvjml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.18.97 80' Jun 17 22:07:12.231: INFO: stderr: "+ nc -v -t -w 2 10.233.18.97 80\n+ echo hostName\nConnection to 10.233.18.97 80 port [tcp/http] succeeded!\n" Jun 17 22:07:12.231: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jun 17 22:07:12.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8377 exec execpod-affinityfvjml -- /bin/sh -x -c for i 
in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.18.97:80/ ; done' Jun 17 22:07:12.616: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n" Jun 17 22:07:12.617: INFO: stdout: "\naffinity-clusterip-transition-tm9fz\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-tm9fz\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-tm9fz\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-tm9fz\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-dt8n4\naffinity-clusterip-transition-tm9fz" Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-tm9fz Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-tm9fz Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-tm9fz Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-tm9fz Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-dt8n4 Jun 17 22:07:12.617: INFO: Received response from host: affinity-clusterip-transition-tm9fz Jun 17 22:07:12.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8377 exec execpod-affinityfvjml -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.18.97:80/ ; done' Jun 
17 22:07:12.936: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.18.97:80/\n" Jun 17 22:07:12.936: INFO: stdout: "\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr\naffinity-clusterip-transition-n8xnr" Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Received response from host: affinity-clusterip-transition-n8xnr Jun 17 22:07:12.936: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8377, will wait for the garbage collector to delete the pods Jun 17 22:07:13.001: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.808197ms Jun 17 22:07:13.102: INFO: Terminating 
ReplicationController affinity-clusterip-transition pods took: 100.690015ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:28.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8377" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.145 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":413,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:28.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Jun 17 22:07:28.490: INFO: observed Pod pod-test in namespace pods-5024 in phase Pending with labels: map[test-pod-static:true] & conditions [] Jun 17 22:07:28.492: INFO: observed Pod pod-test in namespace pods-5024 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC }] Jun 17 22:07:28.769: INFO: observed Pod pod-test in namespace pods-5024 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC }] Jun 17 22:07:30.213: INFO: observed Pod pod-test in namespace pods-5024 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC }] Jun 17 22:07:31.738: INFO: Found Pod pod-test in namespace pods-5024 in phase Running with 
labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:07:28 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Jun 17 22:07:31.752: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Jun 17 22:07:31.772: INFO: observed event type ADDED Jun 17 22:07:31.772: INFO: observed event type MODIFIED Jun 17 22:07:31.772: INFO: observed event type MODIFIED Jun 17 22:07:31.772: INFO: observed event type MODIFIED Jun 17 22:07:31.772: INFO: observed event type MODIFIED Jun 17 22:07:31.772: INFO: observed event type MODIFIED Jun 17 22:07:31.773: INFO: observed event type MODIFIED Jun 17 22:07:31.773: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:31.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5024" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":27,"skipped":421,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:39.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-84e1dc51-1e06-4a97-b8f5-61ed14804362 in namespace container-probe-2144 Jun 17 22:06:44.014: INFO: Started pod busybox-84e1dc51-1e06-4a97-b8f5-61ed14804362 in namespace container-probe-2144 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 22:06:44.016: INFO: Initial restart count of pod busybox-84e1dc51-1e06-4a97-b8f5-61ed14804362 is 0 Jun 17 22:07:32.168: INFO: Restart count of pod container-probe-2144/busybox-84e1dc51-1e06-4a97-b8f5-61ed14804362 is now 1 (48.151478306s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:32.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2144" for this suite. 
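Reference sketch for the spec above: the probe test watches restartCount move from 0 to 1 after the file backing an exec liveness probe disappears. A close analogue of the pod it creates (names and timings illustrative; busybox tag assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount climbs once /tmp/health is gone and the probe starts failing:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'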
• [SLOW TEST:52.208 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":312,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:09.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:07:09.395: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 17 22:07:14.401: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 17 22:07:14.402: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 17 22:07:16.408: INFO: Creating deployment "test-rollover-deployment" Jun 17 22:07:16.416: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 17 22:07:18.421: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 17 22:07:18.433: INFO: Ensure that both replica sets have 1 created replica Jun 17 22:07:18.440: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 17 22:07:18.448: INFO: Updating deployment test-rollover-deployment Jun 17 22:07:18.448: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 17 22:07:20.453: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 17 22:07:20.458: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 17 22:07:20.462: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:20.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100438, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:22.469: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:22.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100438, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:24.471: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:24.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:26.469: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:26.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:28.469: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:28.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:30.469: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:30.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:32.470: INFO: all replica sets need to contain the pod-template-hash label Jun 17 22:07:32.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100443, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100436, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:34.469: INFO: Jun 17 22:07:34.469: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 22:07:34.477: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9422 4159f7a3-76ed-477e-8b25-f30819360d2d 47102 2 2022-06-17 22:07:16 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-17 22:07:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:07:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0080b4328 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-17 22:07:16 +0000 UTC,LastTransitionTime:2022-06-17 22:07:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-06-17 22:07:33 +0000 UTC,LastTransitionTime:2022-06-17 22:07:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jun 17 22:07:34.481: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9422 b1c96635-bf12-4626-b1c0-08b471d378ac 47086 2 2022-06-17 22:07:18 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-rollover-deployment 4159f7a3-76ed-477e-8b25-f30819360d2d 0xc00810e750 0xc00810e751}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:07:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4159f7a3-76ed-477e-8b25-f30819360d2d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00810e7c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:34.482: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 17 22:07:34.482: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9422 d9f0692c-8ac0-4f27-96e5-308eed6d2fec 47100 2 2022-06-17 22:07:09 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4159f7a3-76ed-477e-8b25-f30819360d2d 0xc00810e547 0xc00810e548}] [] [{e2e.test Update apps/v1 2022-06-17 22:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:07:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4159f7a3-76ed-477e-8b25-f30819360d2d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00810e5e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:34.482: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9422 60bc383f-fcb2-4258-beb4-94ae57ec16f9 46823 2 2022-06-17 22:07:16 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4159f7a3-76ed-477e-8b25-f30819360d2d 0xc00810e657 0xc00810e658}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4159f7a3-76ed-477e-8b25-f30819360d2d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0xc00810e6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:34.485: INFO: Pod "test-rollover-deployment-98c5f4599-hqxds" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-hqxds test-rollover-deployment-98c5f4599- deployment-9422 e9d18f4f-3002-481f-a276-f9d395eacbad 46952 0 2022-06-17 22:07:18 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.191" ], "mac": "6a:20:21:67:40:33", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.191" ], "mac": "6a:20:21:67:40:33", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 b1c96635-bf12-4626-b1c0-08b471d378ac 0xc007d8440f 0xc007d84420}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1c96635-bf12-4626-b1c0-08b471d378ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.191\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cqtms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqtms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.191,StartTime:2022-06-17 22:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://a8975d74b6732e4fdec3f9af8866f17d6fd061c0381f149c9d499e30e2601d54,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:34.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9422" for this suite. 
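------------------------------
Context for the polling loop above: the e2e framework re-reads the Deployment roughly every 2s and logs its DeploymentStatus until the rollover finishes, i.e. until the new ReplicaSet "test-rollover-deployment-98c5f4599" owns all updated, available replicas and both old ReplicaSets are scaled to zero. A minimal client-go sketch of an equivalent wait (namespace and name taken from the log; the helper itself is illustrative, not the framework's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses via --kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll at the ~2s cadence visible in the log until the Deployment
	// reports the rollover complete.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("deployment-9422").
			Get(context.TODO(), "test-rollover-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("deployment status: %+v\n", d.Status)
		// Done when the controller has observed the latest generation and
		// every desired replica is updated and available.
		return d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas &&
			d.Status.UnavailableReplicas == 0, nil
	})
	if err != nil {
		panic(err)
	}
}
------------------------------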
• [SLOW TEST:25.127 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":23,"skipped":334,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:31.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-b6847a9e-fe53-4b9d-9ce6-cac68a75d403 STEP: Creating a pod to test consume secrets Jun 17 22:07:31.826: INFO: Waiting up to 5m0s for pod "pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e" in namespace "secrets-7021" to be "Succeeded or Failed" Jun 17 22:07:31.831: INFO: Pod "pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.745276ms Jun 17 22:07:33.835: INFO: Pod "pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00861433s Jun 17 22:07:35.839: INFO: Pod "pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012646048s Jun 17 22:07:37.842: INFO: Pod "pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015901831s STEP: Saw pod success Jun 17 22:07:37.842: INFO: Pod "pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e" satisfied condition "Succeeded or Failed" Jun 17 22:07:37.844: INFO: Trying to get logs from node node2 pod pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e container secret-volume-test: STEP: delete the pod Jun 17 22:07:37.857: INFO: Waiting for pod pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e to disappear Jun 17 22:07:37.860: INFO: Pod pod-secrets-7416e2e1-31d0-479e-a16c-93b4dcd58b4e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:37.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7021" for this suite. 
• [SLOW TEST:6.080 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:32.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:38.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9517" for this suite. • [SLOW TEST:6.055 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":326,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:38.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0617 22:07:38.390591 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Jun 17 22:07:38.397: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jun 17 22:07:38.401: INFO: starting watch STEP: patching STEP: updating Jun 17 22:07:38.414: INFO: waiting for watch events with expected annotations Jun 17 22:07:38.414: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:38.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9438" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":25,"skipped":383,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:17.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Jun 17 22:07:17.041: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:19.045: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:21.046: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:23.045: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Jun 17 22:07:23.061: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:25.064: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:27.065: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Jun 17 22:07:27.073: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:27.076: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:29.076: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:29.079: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:31.077: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:31.081: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:33.076: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:33.080: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:35.077: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:35.080: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:37.076: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:37.080: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:39.077: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 17 22:07:39.080: INFO: Pod pod-with-prestop-exec-hook still exists Jun 17 22:07:41.079: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear Jun 17 22:07:41.083: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:41.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3854" for this suite. • [SLOW TEST:24.112 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":609,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:41.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:41.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6722" for this suite. 
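------------------------------
The 406 assertion in the table-transformation test above exercises server-side Table rendering: clients opt in with an Accept header of application/json;as=Table;v=v1;g=meta.k8s.io, and an endpoint that cannot produce Table metadata must answer 406 Not Acceptable. A rough client-go sketch of issuing such a request against the pods endpoint (resource and namespace are placeholders, not the test's no-metadata backend):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver to render the list as a Table; backends without
	// Table support are expected to reply 406.
	res := cs.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(context.TODO())

	var status int
	res.StatusCode(&status)
	raw, err := res.Raw()
	fmt.Println("HTTP status:", status, "err:", err, "bytes:", len(raw))
}
------------------------------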
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":425,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:37.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 17 22:07:37.905: INFO: Waiting up to 5m0s for pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f" in namespace "emptydir-6297" to be "Succeeded or Failed" Jun 17 22:07:37.907: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405961ms Jun 17 22:07:39.911: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006239514s Jun 17 22:07:41.916: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010646725s Jun 17 22:07:43.921: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01647704s Jun 17 22:07:45.927: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022418797s Jun 17 22:07:47.931: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026340581s STEP: Saw pod success Jun 17 22:07:47.931: INFO: Pod "pod-706ef105-ee31-4602-9487-b3536fc9d40f" satisfied condition "Succeeded or Failed" Jun 17 22:07:47.933: INFO: Trying to get logs from node node2 pod pod-706ef105-ee31-4602-9487-b3536fc9d40f container test-container: STEP: delete the pod Jun 17 22:07:47.963: INFO: Waiting for pod pod-706ef105-ee31-4602-9487-b3536fc9d40f to disappear Jun 17 22:07:47.964: INFO: Pod pod-706ef105-ee31-4602-9487-b3536fc9d40f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:47.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6297" for this suite. 
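------------------------------
Decoding "(non-root,0644,tmpfs)" in the emptyDir test above: the pod runs as a non-root UID, the emptyDir is memory-backed (tmpfs), and the container creates a file with mode 0644 and reports the observed permissions back for the assertion. A sketch of that pod shape; the UID, file name, and mounttest flags are illustrative of how the agnhost image is typically driven, not copied from this run:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod sketches the non-root/0644/tmpfs case: a Memory-medium
// emptyDir written by UID 1001, with the file's mode echoed for the test.
func tmpfsEmptyDirPod() *corev1.Pod {
	nonRoot := int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args: []string{"mounttest",
					"--fs_type=/test-volume",
					"--new_file_0644=/test-volume/test-file",
					"--file_perm=/test-volume/test-file"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &nonRoot,
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}
------------------------------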
• [SLOW TEST:10.101 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":425,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:34.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:07:34.529: INFO: Creating deployment "webserver-deployment" Jun 17 22:07:34.531: INFO: Waiting for observed generation 1 Jun 17 22:07:36.538: INFO: Waiting for all required pods to come up Jun 17 22:07:36.542: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 17 22:07:46.553: INFO: Waiting for deployment "webserver-deployment" to complete Jun 17 22:07:46.557: INFO: Updating deployment "webserver-deployment" with a non-existent image Jun 17 22:07:46.565: INFO: Updating deployment webserver-deployment Jun 17 22:07:46.565: INFO: Waiting for observed generation 2 Jun 17 22:07:48.570: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 17 22:07:48.572: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 17 22:07:48.574: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 17 22:07:48.581: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 17 22:07:48.581: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 17 22:07:48.583: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jun 17 22:07:48.587: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jun 17 22:07:48.587: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jun 17 22:07:48.595: INFO: Updating deployment webserver-deployment Jun 17 22:07:48.595: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jun 17 22:07:48.599: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 17 22:07:48.602: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 22:07:48.606: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9697 70c66e0d-25af-4f34-8d06-096f8d97a4cd 47599 3 2022-06-17 
22:07:34 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001cb0d98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-06-17 22:07:44 +0000 UTC,LastTransitionTime:2022-06-17 22:07:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-06-17 22:07:46 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 17 22:07:48.609: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9697 aba43e64-8662-4422-b615-e138a47f6ab0 47602 3 2022-06-17 22:07:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 70c66e0d-25af-4f34-8d06-096f8d97a4cd 0xc001cb1177 0xc001cb1178}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70c66e0d-25af-4f34-8d06-096f8d97a4cd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001cb11f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:48.609: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 17 22:07:48.609: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-9697 ba0c709b-9564-4072-acff-3ccc48ecd39f 47600 3 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 70c66e0d-25af-4f34-8d06-096f8d97a4cd 0xc001cb1257 0xc001cb1258}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:07:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70c66e0d-25af-4f34-8d06-096f8d97a4cd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001cb12c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:07:48.614: INFO: Pod "webserver-deployment-795d758f88-4zz8h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4zz8h webserver-deployment-795d758f88- deployment-9697 8cdf6b30-d7bc-45f8-be77-9c33830a9dad 47606 0 2022-06-17 22:07:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 aba43e64-8662-4422-b615-e138a47f6ab0 0xc0065772cf 0xc0065772e0}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aba43e64-8662-4422-b615-e138a47f6ab0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w5l6v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5l6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.615: INFO: Pod "webserver-deployment-795d758f88-59nd9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-59nd9 webserver-deployment-795d758f88- deployment-9697 b235b43a-40a6-417b-8014-00a74f027bdc 47580 0 2022-06-17 22:07:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.242" ], "mac": "3e:ee:44:a2:c5:97", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.242" ], "mac": "3e:ee:44:a2:c5:97", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 aba43e64-8662-4422-b615-e138a47f6ab0 0xc00657741f 0xc006577430}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aba43e64-8662-4422-b615-e138a47f6ab0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2022-06-17 22:07:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t4g2m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4g2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-06-17 22:07:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.615: INFO: Pod "webserver-deployment-795d758f88-dnwm4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dnwm4 webserver-deployment-795d758f88- deployment-9697 95db2ca8-1aac-4cfc-ac78-02f2e373b660 47564 0 2022-06-17 22:07:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 aba43e64-8662-4422-b615-e138a47f6ab0 0xc00657761f 0xc006577630}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aba43e64-8662-4422-b615-e138a47f6ab0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-17 22:07:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bntfg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bntfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-17 22:07:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.615: INFO: Pod "webserver-deployment-795d758f88-j6rkz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-j6rkz webserver-deployment-795d758f88- deployment-9697 9fb4a8d8-3e3a-464e-bb8a-fd5f5edc0b77 47557 0 2022-06-17 22:07:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 aba43e64-8662-4422-b615-e138a47f6ab0 0xc00657781f 0xc006577830}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aba43e64-8662-4422-b615-e138a47f6ab0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bhng7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhng7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-17 22:07:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.616: INFO: Pod "webserver-deployment-795d758f88-jk5zg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jk5zg webserver-deployment-795d758f88- deployment-9697 07334d8e-a2cf-419a-a773-9b97b6280d0e 47541 0 2022-06-17 22:07:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 aba43e64-8662-4422-b615-e138a47f6ab0 0xc0065779ff 0xc006577a10}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aba43e64-8662-4422-b615-e138a47f6ab0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sc2bg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc2bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-17 22:07:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.616: INFO: Pod "webserver-deployment-795d758f88-m9jwp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-m9jwp webserver-deployment-795d758f88- deployment-9697 c7068192-585d-4697-94be-00e2d759cb01 47534 0 2022-06-17 22:07:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 aba43e64-8662-4422-b615-e138a47f6ab0 0xc006577bdf 0xc006577bf0}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aba43e64-8662-4422-b615-e138a47f6ab0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-06-17 22:07:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g4z2h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4z2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-06-17 22:07:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.616: INFO: Pod "webserver-deployment-847dcfb7fb-4pc28" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4pc28 webserver-deployment-847dcfb7fb- deployment-9697 9266d59d-059d-4e23-bc58-3d3d7aee79cc 47332 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.240" ], "mac": "0e:5f:5a:c6:31:03", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.240" ], "mac": "0e:5f:5a:c6:31:03", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc006577dbf 0xc006577dd0}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q8t2z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8t2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.240,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://640a3e77fa90a22ba685378de493bfffff58f264b4d9111fa8d660e63107e546,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.617: INFO: Pod "webserver-deployment-847dcfb7fb-849hc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-849hc webserver-deployment-847dcfb7fb- deployment-9697 cd95bfc2-96a6-4ce3-8fc3-f5d75b286805 47409 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.197" ], "mac": "8e:34:b2:94:bb:c2", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.197" ], "mac": "8e:34:b2:94:bb:c2", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc006577fbf 0xc006577fd0}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jxplv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jxplv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.197,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5af047c797333f796bb36dfa1e374533c9c910fa6d2f733f592b13432a6322e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.617: INFO: Pod "webserver-deployment-847dcfb7fb-ftkkw" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ftkkw webserver-deployment-847dcfb7fb- deployment-9697 3a1425e4-7d64-4691-9fa5-83d725e6d6d3 47610 0 2022-06-17 22:07:48 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e126af 0xc003e126c0}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p7jpx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7jpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.617: INFO: Pod "webserver-deployment-847dcfb7fb-p8cps" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-p8cps webserver-deployment-847dcfb7fb- deployment-9697 be2c6e77-e1d2-4af0-bdcc-c071e1ad3d5d 47458 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.199" ], "mac": "02:62:3b:07:dc:d9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.199" ], "mac": "02:62:3b:07:dc:d9", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e127ef 0xc003e12800}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.199\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-97w6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97w6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.199,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2163f7c890e851183ba5519dbba4c34970fd15b834b79546274e80e21c796745,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.618: INFO: Pod "webserver-deployment-847dcfb7fb-qfzbg" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qfzbg webserver-deployment-847dcfb7fb- deployment-9697 6f26446a-b47d-47f1-a293-edf535d8d5c8 47438 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.196" ], "mac": "2e:b1:07:57:37:c0", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.196" ], "mac": "2e:b1:07:57:37:c0", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e129ef 0xc003e12a00}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cxwzl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxwzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.196,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://65669ee257cbb8420917586cb829a1266a6747f2d661b82302e64030b14bbef2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.618: INFO: Pod "webserver-deployment-847dcfb7fb-r9768" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-r9768 webserver-deployment-847dcfb7fb- deployment-9697 fabbe814-f0b8-444e-9878-c8bc52f51eee 47349 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.241" ], "mac": "8a:f3:98:24:33:74", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.241" ], "mac": "8a:f3:98:24:33:74", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e12bef 0xc003e12c00}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.241\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7x4hp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7x4hp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.241,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://9f8b539f6c232d8d7a9b2d313c0afba992e022783fb0657ece770abb8464071d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.618: INFO: Pod "webserver-deployment-847dcfb7fb-rlpqt" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rlpqt webserver-deployment-847dcfb7fb- deployment-9697 4ecc4af3-2711-40f9-983c-542d726c5eee 47452 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.201" ], "mac": "56:c0:87:80:b4:a7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.201" ], "mac": "56:c0:87:80:b4:a7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e12def 0xc003e12e00}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.201\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7mbzs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mbzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.201,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5e263d4f6394f70b2b3adb12c6cd7ca69f6abe4c9ebda0ffb1a86f36192a1629,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.619: INFO: Pod "webserver-deployment-847dcfb7fb-w95sf" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-w95sf webserver-deployment-847dcfb7fb- deployment-9697 9f542652-bcde-4878-9521-8f8cc1d151a2 47342 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.239" ], "mac": "2e:80:40:37:fe:72", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.239" ], 
"mac": "2e:80:40:37:fe:72", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e12fef 0xc003e13000}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-n5z6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5z6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.239,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://cd8f4dae612e360788089739abc1e3f3ec496c3bf407ff91005d6f7fb9b9c377,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.619: INFO: Pod "webserver-deployment-847dcfb7fb-wb5vg" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wb5vg webserver-deployment-847dcfb7fb- deployment-9697 a149aa6e-4906-4867-b1f7-f0235dfb1c18 47607 0 2022-06-17 22:07:48 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e131ef 0xc003e13200}] [] 
[{kube-controller-manager Update v1 2022-06-17 22:07:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zq72z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zq72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tol
erations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:07:48.619: INFO: Pod "webserver-deployment-847dcfb7fb-zkz48" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zkz48 webserver-deployment-847dcfb7fb- deployment-9697 cfb7fe1d-6b16-4e17-9e32-d14c5cc61045 47330 0 2022-06-17 22:07:34 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.238" ], "mac": "b2:2f:d6:6f:33:2a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.238" ], "mac": "b2:2f:d6:6f:33:2a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ba0c709b-9564-4072-acff-3ccc48ecd39f 0xc003e1335f 0xc003e13370}] [] [{kube-controller-manager Update v1 2022-06-17 22:07:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba0c709b-9564-4072-acff-3ccc48ecd39f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:07:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:07:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.238\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ddhnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddhnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:07:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.4.238,StartTime:2022-06-17 22:07:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:07:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://bb885572144371f831cd9e8ec1622f8022bbaf279f1662e2e07d34184ef5c796,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:48.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9697" for this suite. 
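Editor's note on the behavior this Deployment test exercised, since it is easy to miss in the pod dumps above: when a Deployment is scaled while a rolling update is still in flight, the controller does not assign all of the new replicas to the new ReplicaSet; it splits them across the old and new ReplicaSets in proportion to their current sizes, within the maxSurge/maxUnavailable bounds. A minimal client-go sketch of triggering that situation (assuming client-go v0.21.x to match the suite; the namespace, deployment name, and name=httpd label come from the log, while the target replica count is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        ns := "deployment-9697" // generated test namespace, from the log

        // Scale the Deployment while its rolling update is still in progress.
        d, err := cs.AppsV1().Deployments(ns).Get(ctx, "webserver-deployment", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        replicas := int32(30) // placeholder target
        d.Spec.Replicas = &replicas
        if _, err := cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }

        // The controller distributes the added replicas across the old and new
        // ReplicaSets in proportion to their current sizes rather than putting
        // them all on one side, so neither version starves during the rollout.
        rss, err := cs.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{LabelSelector: "name=httpd"})
        if err != nil {
            panic(err)
        }
        for _, rs := range rss.Items {
            fmt.Printf("%s: %d/%d ready\n", rs.Name, rs.Status.ReadyReplicas, *rs.Spec.Replicas)
        }
    }
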
• [SLOW TEST:14.126 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":24,"skipped":337,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:48.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:48.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4675" for this suite. 
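Editor's note: the Kubelet test recorded above checks a small but important invariant, namely that a pod whose only container exits non-zero every time it runs (and therefore sits in a crash loop) can still be deleted cleanly. A sketch of that pod shape and its deletion, again assuming client-go v0.21.x; the pod name, namespace, and busybox tag here are illustrative, not taken from the log:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        ns := "default" // the suite used a generated namespace (kubelet-test-4675)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "bin-false",
                    Image:   "busybox:1.29",
                    Command: []string{"/bin/false"}, // exits non-zero every time
                }},
                // RestartPolicy defaults to Always, so the container crash-loops.
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // Deletion must succeed even though the container never ran successfully.
        if err := cs.CoreV1().Pods(ns).Delete(ctx, "bin-false", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
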
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":357,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":31,"skipped":477,"failed":0} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:05:21.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 in namespace container-probe-4718 Jun 17 22:05:27.531: INFO: Started pod liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 in namespace container-probe-4718 STEP: checking the pod's current state and verifying that restartCount is present Jun 17 22:05:27.533: INFO: Initial restart count of pod liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 is 0 Jun 17 22:05:43.568: INFO: Restart count of pod container-probe-4718/liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 is now 1 (16.034884004s elapsed) Jun 17 22:06:03.609: INFO: Restart count of pod container-probe-4718/liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 is now 2 (36.07572851s elapsed) Jun 17 22:06:23.652: INFO: Restart count of pod container-probe-4718/liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 is now 3 (56.11915769s elapsed) Jun 17 22:06:47.704: INFO: Restart count of pod container-probe-4718/liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 is now 4 (1m20.170745993s elapsed) Jun 17 22:07:51.862: INFO: Restart count of pod container-probe-4718/liveness-9d921226-5ef5-4b95-8fd5-73a7ea4da2c5 is now 5 (2m24.329119144s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:51.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4718" for this suite. 
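Editor's note on the restart timings above: the gaps widen (restart 1 after 16s, restart 5 only after 2m24s) because the kubelet applies exponential backoff when restarting a crash-looping container while the failing liveness probe keeps killing it; the restart count itself only ever increases, which is exactly what the test asserts. A sketch of a pod built this way, assuming client-go v0.21.x (where the probe's handler field is still named Handler; releases from v0.23 rename it to ProbeHandler); names and timings are illustrative:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-always-fails"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox:1.29",
                    Command: []string{"sleep", "3600"}, // the container itself is healthy
                    LivenessProbe: &corev1.Probe{
                        // The probe always fails, so the kubelet kills and
                        // restarts the container, bumping restartCount each time.
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
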
• [SLOW TEST:150.443 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:51.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:51.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6732" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":33,"skipped":507,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":50,"skipped":548,"failed":0} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:06:52.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:52.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5006" for this suite. 
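Editor's note: the readiness-probe case that just finished is the mirror image of the liveness test above. A failing readiness probe only keeps the pod out of the Ready condition (and thus out of Service endpoints); the kubelet never restarts the container for it, so restartCount stays 0 for the full 60-second observation window. The only structural difference from the liveness sketch is which field the probe hangs off, e.g. (illustrative, same client-go v0.21.x field names):

    package probes

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // failingReadinessContainer returns a container whose readiness probe never
    // succeeds. The container keeps running; the pod simply never reports Ready.
    func failingReadinessContainer() corev1.Container {
        return corev1.Container{
            Name:    "busybox",
            Image:   "busybox:1.29",
            Command: []string{"sleep", "3600"},
            ReadinessProbe: &corev1.Probe{
                Handler: corev1.Handler{ // ProbeHandler in k8s.io/api >= v0.23
                    Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                },
                PeriodSeconds:    5,
                FailureThreshold: 1,
            },
        }
    }
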
• [SLOW TEST:60.045 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":548,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:47.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:07:48.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:07:50.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:52.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:54.258: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:56.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100468, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:07:59.269: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:07:59.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4402" for this suite. STEP: Destroying namespace "webhook-4402-markers" for this suite. 
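Editor's note on the guarantee behind this webhook test: the API server skips admission webhooks when the object under admission is itself a ValidatingWebhookConfiguration or MutatingWebhookConfiguration, so a deny-all or unreachable webhook can never wedge a cluster by blocking its own removal. That is why the dummy configurations above stayed creatable and deletable even with intercepting webhooks registered. A sketch of that create-then-delete round trip, assuming client-go v0.21.x; the object name is illustrative:

    package main

    import (
        "context"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // A dummy configuration with no webhooks is a valid object.
        dummy := &admissionregistrationv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-dummy-validating"},
        }
        client := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
        if _, err := client.Create(ctx, dummy, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // Deletion must succeed even if other webhooks claim to intercept
        // admissionregistration.k8s.io resources: they are not consulted here.
        if err := client.Delete(ctx, "e2e-dummy-validating", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
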
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.352 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":30,"skipped":437,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:51.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:07:52.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873" in namespace "projected-3678" to be "Succeeded or Failed" Jun 17 22:07:52.026: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219639ms Jun 17 22:07:54.029: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005713912s Jun 17 22:07:56.035: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011273259s Jun 17 22:07:58.038: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01461593s Jun 17 22:08:00.043: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019059129s Jun 17 22:08:02.048: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.024650736s STEP: Saw pod success Jun 17 22:08:02.048: INFO: Pod "downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873" satisfied condition "Succeeded or Failed" Jun 17 22:08:02.051: INFO: Trying to get logs from node node2 pod downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873 container client-container: STEP: delete the pod Jun 17 22:08:02.064: INFO: Waiting for pod downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873 to disappear Jun 17 22:08:02.066: INFO: Pod downwardapi-volume-53d6838a-9968-48c8-854e-94bcf2db1873 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:02.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3678" for this suite. • [SLOW TEST:10.082 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":511,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:52.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Jun 17 22:07:52.607: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:54.611: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:56.615: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:07:58.614: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:00.614: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 17 22:08:01.630: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:02.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9473" for this suite. 
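Editor's note: adoption and release in the ReplicaSet test above both hinge on the interplay of label selectors and ownerReferences. An orphan pod whose labels match the ReplicaSet's selector gets an ownerReference added (adoption), and relabeling an owned pod so it no longer matches causes the controller to drop the ownerReference and create a replacement (release). A sketch of the release step as a label patch, assuming client-go v0.21.x; the pod name and the 'name' label key come from the log, the new label value and namespace are illustrative:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // the suite used a generated namespace (replicaset-9473)

        // Changing the 'name' label takes the pod out of the ReplicaSet's
        // selector; the controller then removes its ownerReference (release)
        // and creates a new pod to restore the desired replica count.
        patch := []byte(`{"metadata":{"labels":{"name":"pod-adoption-release-released"}}}`)
        if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), "pod-adoption-release",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
    }
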
• [SLOW TEST:10.083 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":52,"skipped":553,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:48.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jun 17 22:07:49.271: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jun 17 22:07:51.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:53.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:55.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:57.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:07:59.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 17 22:08:01.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does 
not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100469, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:08:04.291: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:08:04.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:12.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5291" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:23.588 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":26,"skipped":405,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:59.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jun 17 22:07:59.399: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:07:59.399: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 
22:07:59.402: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:07:59.402: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:07:59.409: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:07:59.409: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:07:59.429: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:07:59.429: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jun 17 22:08:04.159: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jun 17 22:08:04.159: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jun 17 22:08:04.558: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Jun 17 22:08:04.564: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.565: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 0 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.566: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.569: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.569: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.575: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:04.575: INFO: observed Deployment test-deployment in namespace deployment-1068 with 
ReadyReplicas 2 Jun 17 22:08:04.586: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:04.586: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:04.589: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:04.589: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:07.576: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:07.576: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:07.588: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 STEP: listing Deployments Jun 17 22:08:07.590: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jun 17 22:08:07.602: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Jun 17 22:08:07.609: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:07.609: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:07.614: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:07.621: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:07.625: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:12.414: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:12.434: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:12.457: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:12.463: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jun 17 22:08:18.322: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jun 17 22:08:18.359: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:18.359: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:18.359: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 1 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace 
deployment-1068 with ReadyReplicas 1 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 3 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 2 Jun 17 22:08:18.360: INFO: observed Deployment test-deployment in namespace deployment-1068 with ReadyReplicas 3 STEP: deleting the Deployment Jun 17 22:08:18.367: INFO: observed event type MODIFIED Jun 17 22:08:18.367: INFO: observed event type MODIFIED Jun 17 22:08:18.367: INFO: observed event type MODIFIED Jun 17 22:08:18.367: INFO: observed event type MODIFIED Jun 17 22:08:18.367: INFO: observed event type MODIFIED Jun 17 22:08:18.367: INFO: observed event type MODIFIED Jun 17 22:08:18.368: INFO: observed event type MODIFIED Jun 17 22:08:18.368: INFO: observed event type MODIFIED Jun 17 22:08:18.368: INFO: observed event type MODIFIED Jun 17 22:08:18.368: INFO: observed event type MODIFIED Jun 17 22:08:18.368: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Jun 17 22:08:18.369: INFO: Log out all the ReplicaSets if there is no deployment created Jun 17 22:08:18.372: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-1068 bac89911-b526-45e3-8f05-d054cf82d5c8 48576 4 2022-06-17 22:08:04 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment a2243a3d-f483-4605-a436-4e0c3ec3d6fb 0xc000495bb7 0xc000495bb8}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:08:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2243a3d-f483-4605-a436-4e0c3ec3d6fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000495cb0 ClusterFirst map[] 
false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:08:18.375: INFO: ReplicaSet "test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-1068 a317646d-184d-4ea1-8b1f-876b2ea37eed 48352 3 2022-06-17 22:07:59 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment a2243a3d-f483-4605-a436-4e0c3ec3d6fb 0xc000495dd7 0xc000495dd8}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:08:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2243a3d-f483-4605-a436-4e0c3ec3d6fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000495e70 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:08:18.377: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-1068 07fc1b90-82e8-4b23-a8ed-595e68409b1b 48567 2 2022-06-17 22:08:07 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment a2243a3d-f483-4605-a436-4e0c3ec3d6fb 0xc000495ef7 0xc000495ef8}] [] [{kube-controller-manager Update apps/v1 2022-06-17 22:08:12 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2243a3d-f483-4605-a436-4e0c3ec3d6fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000e0e0b0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Jun 17 22:08:18.381: INFO: pod: "test-deployment-85d87c6f4b-ds495": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-ds495 test-deployment-85d87c6f4b- deployment-1068 0475558a-38b6-48f2-a94e-ff99a28eeb0d 48471 0 2022-06-17 22:08:07 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.217" ], "mac": "12:bc:ea:5c:de:08", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.217" ], "mac": "12:bc:ea:5c:de:08", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 07fc1b90-82e8-4b23-a8ed-595e68409b1b 0xc000e0eac7 0xc000e0eac8}] [] [{kube-controller-manager Update v1 2022-06-17 22:08:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07fc1b90-82e8-4b23-a8ed-595e68409b1b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} 
{multus Update v1 2022-06-17 22:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:08:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.217\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c2ggk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2ggk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreach
able,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.217,StartTime:2022-06-17 22:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:08:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ef904bb877fb563576d51b18ce050cec70080be4ee728e4381cd0bb2aafc5bbe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 17 22:08:18.381: INFO: pod: "test-deployment-85d87c6f4b-s8j9t": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-s8j9t test-deployment-85d87c6f4b- deployment-1068 4680e2b9-b21f-47a3-a78d-0871c953c907 48566 0 2022-06-17 22:08:12 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.220" ], "mac": "66:49:73:dc:db:d1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.220" ], "mac": "66:49:73:dc:db:d1", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 07fc1b90-82e8-4b23-a8ed-595e68409b1b 0xc000e0ecbf 0xc000e0ecd0}] [] [{kube-controller-manager Update v1 2022-06-17 22:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07fc1b90-82e8-4b23-a8ed-595e68409b1b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-06-17 22:08:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-06-17 22:08:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.220\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p5v6f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5v6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-17 22:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.3.220,StartTime:2022-06-17 22:08:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-17 22:08:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2f647698b6d6c83aba141553b0782a24b691967191de0895faad3d59da662556,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:18.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1068" for this suite. 
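The Deployment lifecycle just logged (create, patch, list, update, patch the status subresource, delete) maps one-to-one onto client-go verbs. A hedged sketch of the patch, list, status-patch, and delete steps, reusing the same kubeconfig; the patch bodies and the "default" namespace are illustrative, not the test's actual payloads:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	ns, name := "default", "test-deployment" // illustrative namespace

	// "patching the Deployment": a strategic-merge patch, e.g. adding the
	// label the log later shows as test-deployment:patched.
	patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}}}`)
	if _, err := cs.AppsV1().Deployments(ns).Patch(ctx, name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}

	// "listing Deployments": filtered by label selector, as the test does.
	list, err := cs.AppsV1().Deployments(ns).List(ctx, metav1.ListOptions{
		LabelSelector: "test-deployment-static=true",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range list.Items {
		fmt.Printf("Found %s with labels: %v\n", d.Name, d.Labels)
	}

	// "patching the DeploymentStatus": status is a subresource, hence the
	// extra "status" argument on Patch. The value is illustrative; the
	// controller reconciles it back.
	statusPatch := []byte(`{"status":{"readyReplicas":3}}`)
	if _, err := cs.AppsV1().Deployments(ns).Patch(ctx, name,
		types.MergePatchType, statusPatch, metav1.PatchOptions{}, "status"); err != nil {
		log.Fatal(err)
	}

	// "deleting the Deployment": the watch in the log then reports the
	// trailing MODIFIED events as the cascade tears the object down.
	if err := cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
}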
• [SLOW TEST:19.024 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:08:02.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-d5zgr in namespace proxy-2066
I0617 22:08:02.110030 31 runners.go:190] Created replication controller with name: proxy-service-d5zgr, namespace: proxy-2066, replica count: 1
I0617 22:08:03.160505 31 runners.go:190] proxy-service-d5zgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 22:08:04.161788 31 runners.go:190] proxy-service-d5zgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 22:08:05.164434 31 runners.go:190] proxy-service-d5zgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0617 22:08:06.164834 31 runners.go:190] proxy-service-d5zgr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 17 22:08:06.167: INFO: setup took 4.067975558s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 17 22:08:06.171: INFO: (0) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 3.579972ms)
Jun 17 22:08:06.171: INFO: (0) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 3.589075ms)
Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 4.06472ms)
Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 3.989555ms)
Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.904503ms)
Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<...
(200; 4.007011ms) Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 4.006717ms) Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 4.377988ms) Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.497456ms) Jun 17 22:08:06.172: INFO: (0) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.470082ms) Jun 17 22:08:06.174: INFO: (0) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 6.384732ms) Jun 17 22:08:06.177: INFO: (0) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test<... (200; 2.916364ms) Jun 17 22:08:06.182: INFO: (1) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.938059ms) Jun 17 22:08:06.182: INFO: (1) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 3.060987ms) Jun 17 22:08:06.182: INFO: (1) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 3.114425ms) Jun 17 22:08:06.182: INFO: (1) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.2342ms) Jun 17 22:08:06.183: INFO: (1) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.37931ms) Jun 17 22:08:06.183: INFO: (1) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.668057ms) Jun 17 22:08:06.183: INFO: (1) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.660633ms) Jun 17 22:08:06.183: INFO: (1) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.767653ms) Jun 17 22:08:06.183: INFO: (1) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.035724ms) Jun 17 22:08:06.183: INFO: (1) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.019959ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.507048ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.490973ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.584014ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.548493ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.709239ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.656254ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.791996ms) Jun 17 22:08:06.186: INFO: (2) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... 
(200; 3.123581ms) Jun 17 22:08:06.187: INFO: (2) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.594ms) Jun 17 22:08:06.187: INFO: (2) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.671796ms) Jun 17 22:08:06.187: INFO: (2) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.758232ms) Jun 17 22:08:06.187: INFO: (2) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.768268ms) Jun 17 22:08:06.187: INFO: (2) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.909094ms) Jun 17 22:08:06.188: INFO: (2) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.37702ms) Jun 17 22:08:06.190: INFO: (3) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.041173ms) Jun 17 22:08:06.190: INFO: (3) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.16016ms) Jun 17 22:08:06.190: INFO: (3) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.192493ms) Jun 17 22:08:06.191: INFO: (3) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.616401ms) Jun 17 22:08:06.191: INFO: (3) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.514444ms) Jun 17 22:08:06.191: INFO: (3) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test (200; 2.941346ms) Jun 17 22:08:06.191: INFO: (3) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.885948ms) Jun 17 22:08:06.192: INFO: (3) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.530559ms) Jun 17 22:08:06.193: INFO: (3) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.514577ms) Jun 17 22:08:06.193: INFO: (3) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 4.416509ms) Jun 17 22:08:06.193: INFO: (3) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 4.413038ms) Jun 17 22:08:06.193: INFO: (3) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 5.148487ms) Jun 17 22:08:06.193: INFO: (3) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 5.298038ms) Jun 17 22:08:06.194: INFO: (3) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 5.710222ms) Jun 17 22:08:06.194: INFO: (3) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 5.74229ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.533102ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.515224ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.590086ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test (200; 2.92356ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... 
(200; 3.218798ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.205348ms) Jun 17 22:08:06.197: INFO: (4) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.473252ms) Jun 17 22:08:06.198: INFO: (4) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.654163ms) Jun 17 22:08:06.198: INFO: (4) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.709917ms) Jun 17 22:08:06.198: INFO: (4) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.05482ms) Jun 17 22:08:06.198: INFO: (4) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 4.290291ms) Jun 17 22:08:06.200: INFO: (5) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 1.857844ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.151044ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 2.133722ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.171075ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.675463ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.820092ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.767984ms) Jun 17 22:08:06.201: INFO: (5) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.823643ms) Jun 17 22:08:06.202: INFO: (5) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test<... (200; 3.092756ms) Jun 17 22:08:06.206: INFO: (6) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 3.224963ms) Jun 17 22:08:06.206: INFO: (6) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 3.045396ms) Jun 17 22:08:06.206: INFO: (6) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.20146ms) Jun 17 22:08:06.207: INFO: (6) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.784776ms) Jun 17 22:08:06.207: INFO: (6) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.740454ms) Jun 17 22:08:06.207: INFO: (6) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.847529ms) Jun 17 22:08:06.207: INFO: (6) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.817077ms) Jun 17 22:08:06.207: INFO: (6) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.381523ms) Jun 17 22:08:06.207: INFO: (6) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.2445ms) Jun 17 22:08:06.210: INFO: (7) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... 
(200; 2.202527ms) Jun 17 22:08:06.210: INFO: (7) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.35431ms) Jun 17 22:08:06.210: INFO: (7) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.605764ms) Jun 17 22:08:06.210: INFO: (7) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 2.827302ms) Jun 17 22:08:06.210: INFO: (7) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.637061ms) Jun 17 22:08:06.210: INFO: (7) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test (200; 2.85303ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 3.148064ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.402897ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.253891ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.630879ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.38452ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.616716ms) Jun 17 22:08:06.211: INFO: (7) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.794581ms) Jun 17 22:08:06.212: INFO: (7) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.122283ms) Jun 17 22:08:06.212: INFO: (7) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.247157ms) Jun 17 22:08:06.214: INFO: (8) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.019924ms) Jun 17 22:08:06.214: INFO: (8) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.139943ms) Jun 17 22:08:06.214: INFO: (8) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.239912ms) Jun 17 22:08:06.214: INFO: (8) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.233095ms) Jun 17 22:08:06.215: INFO: (8) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.583883ms) Jun 17 22:08:06.215: INFO: (8) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.572058ms) Jun 17 22:08:06.215: INFO: (8) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 2.653177ms) Jun 17 22:08:06.215: INFO: (8) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.683034ms) Jun 17 22:08:06.215: INFO: (8) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test (200; 2.500306ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.430565ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.611927ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... 
(200; 2.704751ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test<... (200; 2.808673ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.113928ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.026137ms) Jun 17 22:08:06.219: INFO: (9) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.988488ms) Jun 17 22:08:06.220: INFO: (9) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.646473ms) Jun 17 22:08:06.220: INFO: (9) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.91397ms) Jun 17 22:08:06.220: INFO: (9) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.800819ms) Jun 17 22:08:06.220: INFO: (9) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.010932ms) Jun 17 22:08:06.220: INFO: (9) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.11001ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.187803ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.276589ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.364154ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.778032ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.085399ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.992716ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test<... (200; 2.957989ms) Jun 17 22:08:06.223: INFO: (10) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 3.038851ms) Jun 17 22:08:06.224: INFO: (10) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.449674ms) Jun 17 22:08:06.224: INFO: (10) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.458558ms) Jun 17 22:08:06.224: INFO: (10) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 3.53694ms) Jun 17 22:08:06.224: INFO: (10) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.621149ms) Jun 17 22:08:06.224: INFO: (10) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 3.573351ms) Jun 17 22:08:06.224: INFO: (10) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.706613ms) Jun 17 22:08:06.225: INFO: (10) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.014199ms) Jun 17 22:08:06.226: INFO: (11) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... (200; 2.087016ms) Jun 17 22:08:06.227: INFO: (11) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... 
(200; 2.218119ms) Jun 17 22:08:06.227: INFO: (11) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.207656ms) Jun 17 22:08:06.227: INFO: (11) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.649442ms) Jun 17 22:08:06.227: INFO: (11) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.715881ms) Jun 17 22:08:06.228: INFO: (11) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.943686ms) Jun 17 22:08:06.228: INFO: (11) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 3.546123ms) Jun 17 22:08:06.228: INFO: (11) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.778682ms) Jun 17 22:08:06.228: INFO: (11) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.63482ms) Jun 17 22:08:06.228: INFO: (11) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 3.674619ms) Jun 17 22:08:06.228: INFO: (11) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.734911ms) Jun 17 22:08:06.229: INFO: (11) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.886336ms) Jun 17 22:08:06.229: INFO: (11) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.899645ms) Jun 17 22:08:06.229: INFO: (11) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.150452ms) Jun 17 22:08:06.229: INFO: (11) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 4.28355ms) Jun 17 22:08:06.231: INFO: (12) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.118158ms) Jun 17 22:08:06.231: INFO: (12) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test (200; 2.226459ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.347535ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.242268ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.359334ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.640023ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.626565ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... 
(200; 2.665461ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.01844ms) Jun 17 22:08:06.232: INFO: (12) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.982905ms) Jun 17 22:08:06.233: INFO: (12) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.535154ms) Jun 17 22:08:06.233: INFO: (12) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.486291ms) Jun 17 22:08:06.233: INFO: (12) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.491307ms) Jun 17 22:08:06.233: INFO: (12) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.79449ms) Jun 17 22:08:06.233: INFO: (12) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.861857ms) Jun 17 22:08:06.236: INFO: (13) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.604726ms) Jun 17 22:08:06.236: INFO: (13) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.609933ms) Jun 17 22:08:06.236: INFO: (13) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... (200; 2.66754ms) Jun 17 22:08:06.236: INFO: (13) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.711324ms) Jun 17 22:08:06.236: INFO: (13) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.091092ms) Jun 17 22:08:06.236: INFO: (13) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.978996ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.061524ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.97492ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 3.051389ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.23085ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.292459ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.747448ms) Jun 17 22:08:06.237: INFO: (13) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.912522ms) Jun 17 22:08:06.238: INFO: (13) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.242926ms) Jun 17 22:08:06.238: INFO: (13) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.167846ms) Jun 17 22:08:06.240: INFO: (14) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 1.983024ms) Jun 17 22:08:06.240: INFO: (14) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... (200; 2.496987ms) Jun 17 22:08:06.240: INFO: (14) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... 
(200; 2.482597ms) Jun 17 22:08:06.241: INFO: (14) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.760309ms) Jun 17 22:08:06.241: INFO: (14) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.85317ms) Jun 17 22:08:06.241: INFO: (14) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 3.143758ms) Jun 17 22:08:06.241: INFO: (14) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.389258ms) Jun 17 22:08:06.241: INFO: (14) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 3.496071ms) Jun 17 22:08:06.242: INFO: (14) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.611291ms) Jun 17 22:08:06.242: INFO: (14) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.059729ms) Jun 17 22:08:06.242: INFO: (14) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 4.009678ms) Jun 17 22:08:06.242: INFO: (14) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.28309ms) Jun 17 22:08:06.242: INFO: (14) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.536834ms) Jun 17 22:08:06.245: INFO: (15) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... (200; 2.133022ms) Jun 17 22:08:06.245: INFO: (15) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.1906ms) Jun 17 22:08:06.245: INFO: (15) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.738215ms) Jun 17 22:08:06.245: INFO: (15) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.692414ms) Jun 17 22:08:06.245: INFO: (15) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.665048ms) Jun 17 22:08:06.245: INFO: (15) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.907398ms) Jun 17 22:08:06.246: INFO: (15) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.834117ms) Jun 17 22:08:06.246: INFO: (15) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 3.225964ms) Jun 17 22:08:06.246: INFO: (15) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.314843ms) Jun 17 22:08:06.246: INFO: (15) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.351046ms) Jun 17 22:08:06.246: INFO: (15) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.540884ms) Jun 17 22:08:06.247: INFO: (15) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.88596ms) Jun 17 22:08:06.247: INFO: (15) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.865534ms) Jun 17 22:08:06.247: INFO: (15) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.931412ms) Jun 17 22:08:06.247: INFO: (15) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.250942ms) Jun 17 22:08:06.249: INFO: (16) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 1.881156ms) Jun 17 22:08:06.249: INFO: (16) 
/api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test (200; 2.98218ms) Jun 17 22:08:06.250: INFO: (16) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.942371ms) Jun 17 22:08:06.250: INFO: (16) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.125668ms) Jun 17 22:08:06.250: INFO: (16) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 2.96035ms) Jun 17 22:08:06.250: INFO: (16) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 3.044539ms) Jun 17 22:08:06.250: INFO: (16) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.272412ms) Jun 17 22:08:06.250: INFO: (16) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.304622ms) Jun 17 22:08:06.251: INFO: (16) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.475329ms) Jun 17 22:08:06.251: INFO: (16) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.860899ms) Jun 17 22:08:06.251: INFO: (16) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.936502ms) Jun 17 22:08:06.251: INFO: (16) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.848497ms) Jun 17 22:08:06.253: INFO: (17) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 1.767637ms) Jun 17 22:08:06.253: INFO: (17) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.005162ms) Jun 17 22:08:06.254: INFO: (17) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.434356ms) Jun 17 22:08:06.254: INFO: (17) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.623511ms) Jun 17 22:08:06.254: INFO: (17) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.5614ms) Jun 17 22:08:06.254: INFO: (17) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.59738ms) Jun 17 22:08:06.254: INFO: (17) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.958696ms) Jun 17 22:08:06.254: INFO: (17) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... 
(200; 3.053004ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.260866ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 3.256604ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.491692ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 3.822378ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 3.853739ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.832755ms) Jun 17 22:08:06.255: INFO: (17) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 4.067148ms) Jun 17 22:08:06.258: INFO: (18) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 2.34959ms) Jun 17 22:08:06.258: INFO: (18) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.341489ms) Jun 17 22:08:06.258: INFO: (18) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:462/proxy/: tls qux (200; 2.431488ms) Jun 17 22:08:06.258: INFO: (18) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.35783ms) Jun 17 22:08:06.258: INFO: (18) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.843756ms) Jun 17 22:08:06.259: INFO: (18) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.793393ms) Jun 17 22:08:06.259: INFO: (18) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.955539ms) Jun 17 22:08:06.259: INFO: (18) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:1080/proxy/: test<... (200; 2.886684ms) Jun 17 22:08:06.259: INFO: (18) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: ... (200; 3.237139ms) Jun 17 22:08:06.259: INFO: (18) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.34576ms) Jun 17 22:08:06.259: INFO: (18) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.518684ms) Jun 17 22:08:06.260: INFO: (18) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 4.05914ms) Jun 17 22:08:06.260: INFO: (18) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.07642ms) Jun 17 22:08:06.260: INFO: (18) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 4.015696ms) Jun 17 22:08:06.260: INFO: (18) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.359654ms) Jun 17 22:08:06.262: INFO: (19) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:443/proxy/: test<... 
(200; 2.137437ms) Jun 17 22:08:06.263: INFO: (19) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.309222ms) Jun 17 22:08:06.263: INFO: (19) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d/proxy/: test (200; 2.532395ms) Jun 17 22:08:06.263: INFO: (19) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:162/proxy/: bar (200; 2.707455ms) Jun 17 22:08:06.263: INFO: (19) /api/v1/namespaces/proxy-2066/pods/proxy-service-d5zgr-v855d:160/proxy/: foo (200; 2.677867ms) Jun 17 22:08:06.263: INFO: (19) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname1/proxy/: foo (200; 3.175241ms) Jun 17 22:08:06.263: INFO: (19) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:160/proxy/: foo (200; 3.03652ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/pods/http:proxy-service-d5zgr-v855d:1080/proxy/: ... (200; 3.208407ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/pods/https:proxy-service-d5zgr-v855d:460/proxy/: tls baz (200; 3.183366ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname2/proxy/: bar (200; 3.4766ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/services/proxy-service-d5zgr:portname2/proxy/: bar (200; 3.781734ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/: foo (200; 3.730409ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname1/proxy/: tls baz (200; 4.027992ms) Jun 17 22:08:06.264: INFO: (19) /api/v1/namespaces/proxy-2066/services/https:proxy-service-d5zgr:tlsportname2/proxy/: tls qux (200; 4.096642ms) STEP: deleting ReplicationController proxy-service-d5zgr in namespace proxy-2066, will wait for the garbage collector to delete the pods Jun 17 22:08:06.322: INFO: Deleting ReplicationController proxy-service-d5zgr took: 4.200425ms Jun 17 22:08:06.422: INFO: Terminating ReplicationController proxy-service-d5zgr pods took: 100.305188ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:18.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2066" for this suite. 
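Each numbered iteration above fans out over the same pod and service proxy subresource URLs and checks the response body and status. For reference, a single such request can also be issued from client-go; the following is a minimal sketch, reusing the namespace, service, and port name from this run (the kubeconfig path and the error handling are assumptions, not part of the suite):

package main

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes the same kubeconfig the suite uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Equivalent to
    // GET /api/v1/namespaces/proxy-2066/services/http:proxy-service-d5zgr:portname1/proxy/
    body, err := client.CoreV1().Services("proxy-2066").
        ProxyGet("http", "proxy-service-d5zgr", "portname1", "/", nil).
        DoRaw(context.TODO())
    if err != nil {
        panic(err)
    }
    fmt.Printf("proxied body: %q\n", body) // the run above expects "foo" on portname1
}

The pods/...:160/proxy/ URLs above go through the analogous ProxyGet helper on client.CoreV1().Pods(ns).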
• [SLOW TEST:16.351 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":35,"skipped":513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:12.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Jun 17 22:08:12.461: INFO: Waiting up to 5m0s for pod "var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec" in namespace "var-expansion-8821" to be "Succeeded or Failed" Jun 17 22:08:12.464: INFO: Pod "var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841795ms Jun 17 22:08:14.468: INFO: Pod "var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007535255s Jun 17 22:08:16.473: INFO: Pod "var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012245995s Jun 17 22:08:18.478: INFO: Pod "var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016869093s STEP: Saw pod success Jun 17 22:08:18.478: INFO: Pod "var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec" satisfied condition "Succeeded or Failed" Jun 17 22:08:18.480: INFO: Trying to get logs from node node2 pod var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec container dapi-container: STEP: delete the pod Jun 17 22:08:18.493: INFO: Waiting for pod var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec to disappear Jun 17 22:08:18.494: INFO: Pod var-expansion-38453ab7-c3e6-4bad-b84d-b8b2716c9fec no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:18.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8821" for this suite. 
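The test above relies on kubelet-side variable expansion: a $(VAR) reference in a container's args is substituted from that container's own environment before the command runs. A minimal sketch of a pod of this shape, with an illustrative TEST_VAR name and the busybox image seen elsewhere in this run:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                Command: []string{"sh", "-c"},
                // $(TEST_VAR) is expanded by the kubelet from this
                // container's env, not by the shell.
                Args: []string{"echo $(TEST_VAR)"},
                Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}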
• [SLOW TEST:6.088 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":408,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:18.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:18.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9343" for this suite. 
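The event steps above are ordinary CRUD calls against the namespaced core/v1 Events resource. A sketch with hypothetical event and pod names (the suite's actual payloads differ):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default"

    // A core/v1 Event is a plain namespaced object; its involvedObject
    // must sit in the same namespace as the event itself.
    ev := &corev1.Event{
        ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
        InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
        Reason:         "Testing",
        Message:        "created for demonstration",
        Type:           corev1.EventTypeNormal,
    }
    if _, err := client.CoreV1().Events(ns).Create(context.TODO(), ev, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Listing with an empty namespace spans all namespaces, as in the
    // "listing all events in all namespaces" step above.
    list, err := client.CoreV1().Events("").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("events across all namespaces: %d\n", len(list.Items))
    if err := client.CoreV1().Events(ns).Delete(context.TODO(), "test-event", metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
}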
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":28,"skipped":428,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":31,"skipped":444,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:18.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-7550/configmap-test-6f229105-2206-4e46-9787-ef5ce762b705 STEP: Creating a pod to test consume configMaps Jun 17 22:08:18.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f" in namespace "configmap-7550" to be "Succeeded or Failed" Jun 17 22:08:18.430: INFO: Pod "pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161802ms Jun 17 22:08:20.433: INFO: Pod "pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005010117s Jun 17 22:08:22.436: INFO: Pod "pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008175061s Jun 17 22:08:24.440: INFO: Pod "pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012215618s STEP: Saw pod success Jun 17 22:08:24.440: INFO: Pod "pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f" satisfied condition "Succeeded or Failed" Jun 17 22:08:24.443: INFO: Trying to get logs from node node2 pod pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f container env-test: STEP: delete the pod Jun 17 22:08:24.455: INFO: Waiting for pod pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f to disappear Jun 17 22:08:24.457: INFO: Pod pod-configmaps-55119f92-a3d1-42cc-8db0-89bf904a988f no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:24.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7550" for this suite. 
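Here the pod's env-test container consumes a ConfigMap key through an env var's valueFrom reference. A sketch of the two objects involved, with illustrative names and values:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default"

    // The ConfigMap holding the value to be consumed.
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := client.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // A pod whose container sees the key as $TEST_ENV in its environment.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "TEST_ENV",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}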
• [SLOW TEST:6.071 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:18.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:08:18.650: INFO: Creating ReplicaSet my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba Jun 17 22:08:18.655: INFO: Pod name my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba: Found 0 pods out of 1 Jun 17 22:08:23.661: INFO: Pod name my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba: Found 1 pods out of 1 Jun 17 22:08:23.661: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba" is running Jun 17 22:08:23.664: INFO: Pod "my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba-hdk7x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:08:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:08:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:08:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 22:08:18 +0000 UTC Reason: Message:}]) Jun 17 22:08:23.664: INFO: Trying to dial the pod Jun 17 22:08:28.673: INFO: Controller my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba: Got expected result from replica 1 [my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba-hdk7x]: "my-hostname-basic-d7646c39-3b01-460b-aae7-3a0d2ab6e6ba-hdk7x", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:28.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8103" for this suite. 
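The ReplicaSet under test keeps one replica running that serves its own hostname, which the suite then dials and compares against the pod name. A sketch of such a ReplicaSet follows; the agnhost image and serve-hostname args are assumptions standing in for whatever the suite deploys:

package main

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    replicas := int32(1)
    labels := map[string]string{"name": "my-hostname-basic"}
    rs := &appsv1.ReplicaSet{
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: &replicas,
            // The selector must match the template's labels.
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed image
                        Args:  []string{"serve-hostname"},                // responds with the pod's hostname
                    }},
                },
            },
        },
    }
    if _, err := client.AppsV1().ReplicaSets("default").Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}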
• [SLOW TEST:10.053 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":29,"skipped":448,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:28.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Jun 17 22:08:28.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7652 api-versions' Jun 17 22:08:28.827: INFO: stderr: "" Jun 17 22:08:28.827: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd-publish-openapi-test-multi-to-single-ver.example.com/v5\ncrd-publish-openapi-test-multi-to-single-ver.example.com/v6alpha1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:28.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7652" for this suite. 
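kubectl api-versions is a thin wrapper over the apiserver's discovery endpoints; the same group/version list seen in the stdout above can be retrieved programmatically. A minimal sketch:

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Same data `kubectl api-versions` prints: every group/version the
    // apiserver advertises, including the legacy core group "v1".
    groups, err := client.Discovery().ServerGroups()
    if err != nil {
        panic(err)
    }
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            fmt.Println(v.GroupVersion)
        }
    }
}

The test then simply asserts that "v1" appears in this list.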
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":30,"skipped":454,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:02.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:30.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9519" for this suite. • [SLOW TEST:28.067 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":53,"skipped":563,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:24.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:08:24.572: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c" in namespace "security-context-test-2988" to be "Succeeded or Failed" Jun 17 22:08:24.574: INFO: Pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062186ms Jun 17 22:08:26.578: INFO: Pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005618809s Jun 17 22:08:28.582: INFO: Pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009984738s Jun 17 22:08:30.588: INFO: Pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01535165s Jun 17 22:08:32.591: INFO: Pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019105854s Jun 17 22:08:32.592: INFO: Pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c" satisfied condition "Succeeded or Failed" Jun 17 22:08:32.603: INFO: Got logs for pod "busybox-privileged-false-b9c7645c-7e33-4e28-a94d-41f91d87166c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:32.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2988" for this suite. • [SLOW TEST:8.072 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:28.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-c94139d9-c3b9-4f20-81f8-bfb8e93f8adb STEP: Creating a pod to test consume configMaps Jun 17 22:08:28.969: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0" in namespace "projected-9263" to be "Succeeded or Failed" Jun 17 22:08:28.972: INFO: Pod "pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556957ms Jun 17 22:08:30.976: INFO: Pod "pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006383683s Jun 17 22:08:32.980: INFO: Pod "pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010180255s STEP: Saw pod success Jun 17 22:08:32.980: INFO: Pod "pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0" satisfied condition "Succeeded or Failed" Jun 17 22:08:32.982: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0 container agnhost-container: STEP: delete the pod Jun 17 22:08:32.997: INFO: Waiting for pod pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0 to disappear Jun 17 22:08:32.999: INFO: Pod pod-projected-configmaps-373b5469-b3ed-4484-8fb9-c9b663e565c0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:32.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9263" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":498,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:38.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Jun 17 22:07:38.567: INFO: PodSpec: initContainers in spec.initContainers Jun 17 22:08:35.505: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c865de1e-6c2e-40f1-b2a1-5e3b7358970c", GenerateName:"", Namespace:"init-container-1161", SelfLink:"", UID:"7cbd9bfe-94ac-41fa-8885-2d4ec21c63aa", ResourceVersion:"48987", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791100458, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"567590999"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.204\"\n ],\n \"mac\": \"4e:eb:76:64:ad:fe\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.204\"\n ],\n \"mac\": \"4e:eb:76:64:ad:fe\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc002f24030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f24048)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f24060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f24078)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f24090), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f240a8)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-9qg47", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001e12000), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9qg47", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9qg47", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9qg47", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0057420e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001aca000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005742170)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005742190)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005742198), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00574219c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004b08030), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100458, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100458, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100458, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100458, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.208", PodIP:"10.244.3.204", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.204"}}, StartTime:(*v1.Time)(0xc002f240d8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001aca0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001aca150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://d84b5aa4787ce39d630beb05bf5470d47da9eb8cfb05dbe0eeaf93dd4bf2ad34", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e121e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e12140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00574221f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:35.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1161" for this suite. 
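The pod dumped above has two init containers ahead of one app container. Because init1 runs /bin/false, initialization never completes; on a RestartAlways pod the kubelet keeps restarting the failing init container (note RestartCount:3 for init1 in the status), while init2 and run1 stay Waiting, which is exactly what the test asserts. A sketch of the same pod shape, using the names and images from the dump (resource limits and tolerations omitted):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                // init1 always exits non-zero, so init2 never runs and
                // the app container below never starts.
                {Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/false"}},
                {Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"},
            },
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}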
• [SLOW TEST:56.968 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":26,"skipped":435,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":27,"skipped":609,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:41.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5652, will wait for the garbage collector to delete the pods Jun 17 22:07:53.243: INFO: Deleting Job.batch foo took: 3.223916ms Jun 17 22:07:53.345: INFO: Terminating Job.batch foo pods took: 101.155267ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:38.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5652" for this suite. 
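Deleting the Job and then waiting for the garbage collector to remove its pods corresponds to a delete with a deletion-propagation policy; the foreground policy below is an assumption about how to reproduce that behavior outside the suite, with the job name and namespace taken from this run:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Foreground propagation holds the Job's final deletion until the
    // garbage collector has removed its dependent pods, matching the
    // "will wait for the garbage collector to delete the pods" step.
    policy := metav1.DeletePropagationForeground
    if err := client.BatchV1().Jobs("job-5652").Delete(context.TODO(), "foo",
        metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
        panic(err)
    }
}

With background propagation the Job object would disappear immediately and its pods would be reaped afterwards; with Orphan, the pods would be left behind.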
• [SLOW TEST:57.299 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":28,"skipped":609,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:35.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-08e9b038-0670-4d13-9eac-4cb7ef40c90b STEP: Creating a pod to test consume secrets Jun 17 22:08:35.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff" in namespace "projected-2982" to be "Succeeded or Failed" Jun 17 22:08:35.637: INFO: Pod "pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184829ms Jun 17 22:08:37.640: INFO: Pod "pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007184016s Jun 17 22:08:39.643: INFO: Pod "pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009809308s STEP: Saw pod success Jun 17 22:08:39.643: INFO: Pod "pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff" satisfied condition "Succeeded or Failed" Jun 17 22:08:39.646: INFO: Trying to get logs from node node2 pod pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff container projected-secret-volume-test: STEP: delete the pod Jun 17 22:08:39.660: INFO: Waiting for pod pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff to disappear Jun 17 22:08:39.662: INFO: Pod pod-projected-secrets-86141410-c984-419c-9f89-a301ede28eff no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:39.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2982" for this suite. 
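The projected secret volume in this test maps one secret key to a custom path with an explicit per-item file mode, which the pod then reads back. A sketch of such a pod, assuming the secret named in the run already exists and using illustrative key names and mount paths:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    mode := int32(0400) // the per-item file mode the test asserts on
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                // Map one key to a custom path with an explicit mode.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                    ReadOnly:  true,
                }},
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}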
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":478,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:18.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Jun 17 22:08:18.513: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:41.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6286" for this suite. • [SLOW TEST:23.520 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":36,"skipped":545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:32.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:43.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5783" for this suite. • [SLOW TEST:11.064 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":34,"skipped":520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:38.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Jun 17 22:08:39.049: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:08:39.063: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:08:41.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100519, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100519, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100519, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100519, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 17 22:08:44.086: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after 
mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:44.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8103" for this suite. STEP: Destroying namespace "webhook-8103-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.611 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":29,"skipped":660,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:30.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 17 22:08:31.252: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 17 22:08:33.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100511, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100511, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100511, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791100511, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Jun 17 22:08:36.272: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:46.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2454" for this suite. STEP: Destroying namespace "webhook-2454-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.611 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":54,"skipped":583,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:46.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Jun 17 22:08:46.440: INFO: Major version: 1 STEP: Confirm minor version Jun 17 22:08:46.440: INFO: cleanMinorVersion: 21 Jun 17 22:08:46.440: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:46.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-1909" for this suite.
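------------------------------
The deny-pod-and-configmap webhook test a little further up registers a validating webhook through the AdmissionRegistration API, then verifies that non-compliant creates, PUTs, and PATCHes are rejected while an exempted namespace bypasses the policy. A registration of that shape looks roughly like the sketch below; the names, service reference, and path are illustrative assumptions, and the real suite generates its own serving certs and webhook deployment:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects          # illustrative name
webhooks:
- name: deny.example.com               # must be a fully qualified name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                  # reject requests if the webhook is unreachable
  namespaceSelector: {}                # the test exempts its bypass namespace via labels here
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]   # covers the PUT and PATCH cases in the log
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: default               # illustrative; the suite uses its webhook-* namespaces
      name: e2e-test-webhook
      path: /validate
    # caBundle: <base64 PEM bundle for the webhook's serving CA>
EOF
------------------------------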
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":55,"skipped":593,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:02:30.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2851 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2851 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2851 Jun 17 22:02:30.136: INFO: Found 0 stateful pods, waiting for 1 Jun 17 22:02:40.139: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 17 22:02:50.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 17 22:02:50.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 17 22:02:51.282: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 17 22:02:51.282: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 17 22:02:51.282: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 17 22:02:51.287: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 17 22:02:51.287: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:02:51.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999472s Jun 17 22:02:52.300: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997557994s Jun 17 22:02:53.303: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994714376s Jun 17 22:02:54.306: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.991579372s Jun 17 22:02:55.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.988498978s Jun 17 22:02:56.313: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.984386033s Jun 17 22:02:57.316: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.981787808s Jun 17 22:02:58.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.978794094s Jun 17 22:02:59.322: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.97602258s Jun 17 22:03:00.325: INFO: Verifying statefulset ss doesn't scale past 1 for another 972.470052ms STEP: Scaling up stateful set ss to 3 
replicas and waiting until all of them will be running in namespace statefulset-2851 Jun 17 22:03:01.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:03:01.603: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 17 22:03:01.603: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 17 22:03:01.603: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 17 22:03:01.606: INFO: Found 1 stateful pods, waiting for 3 Jun 17 22:03:11.610: INFO: Found 2 stateful pods, waiting for 3 Jun 17 22:03:21.613: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:03:21.613: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:03:21.613: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 17 22:03:21.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 17 22:03:21.874: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 17 22:03:21.874: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 17 22:03:21.874: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 17 22:03:21.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 17 22:03:22.422: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 17 22:03:22.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 17 22:03:22.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 17 22:03:22.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 17 22:03:22.656: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 17 22:03:22.656: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 17 22:03:22.656: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 17 22:03:22.656: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:03:22.658: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jun 17 22:03:32.664: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 17 22:03:32.664: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 17 22:03:32.664: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 17 22:03:32.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999464s Jun 17 22:03:33.677: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 8.997151625s Jun 17 22:03:34.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993659349s Jun 17 22:03:35.686: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988475771s Jun 17 22:03:36.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984778719s Jun 17 22:03:37.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.981847701s Jun 17 22:03:38.697: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.977579736s Jun 17 22:03:39.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.972709914s Jun 17 22:03:40.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967668085s Jun 17 22:03:41.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 959.110211ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2851 Jun 17 22:03:42.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:03:42.983: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 17 22:03:42.983: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 17 22:03:42.983: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 17 22:03:42.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:03:43.331: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 17 22:03:43.331: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 17 22:03:43.331: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 17 22:03:43.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:03:43.527: INFO: rc: 1 Jun 17 22:03:43.527: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server: error: exit status 1 (The identical RunHostCmd retry against ss-2 then repeated every 10 seconds from Jun 17 22:03:53 through Jun 17 22:08:38, each attempt returning rc: 1 with 'Error from server (NotFound): pods "ss-2" not found', since ss-2 had already been removed by the scale-down.) Jun 17 22:08:48.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2851 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:08:48.203: INFO: rc: 1 Jun 17 22:08:48.203: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Jun 17 22:08:48.203: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 17 22:08:48.224: INFO: Deleting all statefulset in ns statefulset-2851 Jun 17 22:08:48.226: INFO: Scaling statefulset ss to 0 Jun 17 22:08:48.234: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:08:48.236: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:48.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2851" for this suite.
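------------------------------
The ordering verified above falls out of the default OrderedReady pod management policy: ordinal N+1 is created only once ordinal N is Running and Ready, and scale-down removes the highest ordinal first. The test breaks readiness by moving index.html out of the httpd docroot so the readiness probe fails, which is why scaling halts until the file is moved back. A minimal sketch of the fixture's shape; the image tag and probe details are assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test                          # matches "Creating service test" in the log
spec:
  clusterIP: None                     # headless Service backing the StatefulSet
  selector: {foo: bar, baz: blah}
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: OrderedReady   # the default; gives the predictable order
  replicas: 1
  selector:
    matchLabels: {foo: bar, baz: blah}
  template:
    metadata:
      labels: {foo: bar, baz: blah}
    spec:
      containers:
      - name: webserver
        image: httpd:2.4              # illustrative image
        readinessProbe:
          httpGet:
            path: /index.html         # moving index.html away fails this probe
            port: 80
EOF
# Scale-up proceeds ss-0 -> ss-1 -> ss-2, pausing while any pod is unready:
kubectl scale statefulset ss --replicas=3
------------------------------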
• [SLOW TEST:378.151 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:44.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Jun 17 22:08:44.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605" in namespace "projected-2635" to be "Succeeded or Failed" Jun 17 22:08:44.223: INFO: Pod "downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626825ms Jun 17 22:08:46.228: INFO: Pod "downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007263793s Jun 17 22:08:48.230: INFO: Pod "downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009650612s Jun 17 22:08:50.235: INFO: Pod "downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014371992s STEP: Saw pod success Jun 17 22:08:50.235: INFO: Pod "downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605" satisfied condition "Succeeded or Failed" Jun 17 22:08:50.238: INFO: Trying to get logs from node node1 pod downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605 container client-container: STEP: delete the pod Jun 17 22:08:50.261: INFO: Waiting for pod downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605 to disappear Jun 17 22:08:50.263: INFO: Pod downwardapi-volume-10fa23f7-5dc4-4a65-9012-541ccca72605 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:50.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2635" for this suite. 
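------------------------------
The "podname only" case above projects a single downward API item, metadata.name, into a file in the pod's filesystem. A minimal sketch; the pod name, file path, and image are illustrative assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.35       # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # "podname only": just the pod's own name
EOF
------------------------------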
• [SLOW TEST:6.084 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":673,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:50.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Jun 17 22:08:50.838: INFO: created pod pod-service-account-defaultsa Jun 17 22:08:50.838: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 17 22:08:50.847: INFO: created pod pod-service-account-mountsa Jun 17 22:08:50.847: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 17 22:08:50.856: INFO: created pod pod-service-account-nomountsa Jun 17 22:08:50.856: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 17 22:08:50.865: INFO: created pod pod-service-account-defaultsa-mountspec Jun 17 22:08:50.865: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 17 22:08:50.875: INFO: created pod pod-service-account-mountsa-mountspec Jun 17 22:08:50.875: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 17 22:08:50.884: INFO: created pod pod-service-account-nomountsa-mountspec Jun 17 22:08:50.884: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 17 22:08:50.892: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 17 22:08:50.892: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 17 22:08:50.901: INFO: created pod pod-service-account-mountsa-nomountspec Jun 17 22:08:50.901: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 17 22:08:50.909: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 17 22:08:50.909: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:50.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6877" for this suite. 
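------------------------------
The automount matrix above combines automountServiceAccountToken on the ServiceAccount with the same field on the pod spec; whenever the pod-level field is set it overrides the ServiceAccount, which is how the -mountspec and -nomountspec pods end up with the mount state the log reports. A sketch of the opt-out and the pod-level override; names and image are illustrative assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # service-account-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-mountspec-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod-level setting wins over the SA
  containers:
  - name: c
    image: busybox:1.35               # illustrative image
    command: ["sleep", "3600"]
EOF
------------------------------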
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":31,"skipped":678,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:33.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Jun 17 22:08:33.055: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:35.057: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:37.058: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled Jun 17 22:08:37.072: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:39.075: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:41.077: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides Jun 17 22:08:41.089: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:43.091: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:45.092: INFO: The status of Pod pod3 is Running (Ready = true) Jun 17 22:08:45.104: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:47.109: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Jun 17 22:08:47.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.208 http://127.0.0.1:54323/hostname] Namespace:hostport-8677 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:08:47.111: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 Jun 17 22:08:47.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.208:54323/hostname] Namespace:hostport-8677 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:08:47.302: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, 
port: 54323 UDP Jun 17 22:08:47.409: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.208 54323] Namespace:hostport-8677 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 17 22:08:47.409: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:52.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-8677" for this suite. • [SLOW TEST:19.641 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":502,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:39.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:55.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6512" for this suite. • [SLOW TEST:16.116 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":28,"skipped":501,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:55.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 17 22:08:55.867: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-302 c2f400fa-4dc5-47b6-a148-78a31307d8c2 49605 0 2022-06-17 22:08:55 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-17 22:08:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:08:55.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-302 c2f400fa-4dc5-47b6-a148-78a31307d8c2 49606 0 2022-06-17 22:08:55 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-17 22:08:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 17 22:08:55.878: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-302 c2f400fa-4dc5-47b6-a148-78a31307d8c2 49607 0 2022-06-17 22:08:55 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-17 22:08:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 17 22:08:55.878: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-302 c2f400fa-4dc5-47b6-a148-78a31307d8c2 49608 0 2022-06-17 22:08:55 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-17 22:08:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:55.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-302" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":29,"skipped":501,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:52.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 17 22:08:52.694: INFO: Waiting up to 5m0s for pod "pod-3a758df0-751c-4c4f-93dc-e3e3098c311e" in namespace "emptydir-3470" to be "Succeeded or Failed" Jun 17 22:08:52.697: INFO: Pod "pod-3a758df0-751c-4c4f-93dc-e3e3098c311e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.91878ms Jun 17 22:08:54.702: INFO: Pod "pod-3a758df0-751c-4c4f-93dc-e3e3098c311e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008439355s Jun 17 22:08:56.707: INFO: Pod "pod-3a758df0-751c-4c4f-93dc-e3e3098c311e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013087505s STEP: Saw pod success Jun 17 22:08:56.707: INFO: Pod "pod-3a758df0-751c-4c4f-93dc-e3e3098c311e" satisfied condition "Succeeded or Failed" Jun 17 22:08:56.710: INFO: Trying to get logs from node node1 pod pod-3a758df0-751c-4c4f-93dc-e3e3098c311e container test-container: STEP: delete the pod Jun 17 22:08:56.820: INFO: Waiting for pod pod-3a758df0-751c-4c4f-93dc-e3e3098c311e to disappear Jun 17 22:08:56.823: INFO: Pod pod-3a758df0-751c-4c4f-93dc-e3e3098c311e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:56.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3470" for this suite. 
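------------------------------
The (root,0666,tmpfs) case above boils down to an emptyDir volume with medium: Memory plus a permission check on a file inside it. A hand-run sketch, assuming cluster access; the pod name, busybox image, and file path are illustrative, not what the suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory    # backs the volume with tmpfs, as in the (root,0666,tmpfs) case
  containers:
  - name: test
    image: busybox:1.35
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && stat -c '%a' /mnt/f && mount | grep /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
EOF
kubectl logs emptydir-tmpfs-demo   # once the pod has succeeded: expect "666" and a tmpfs mount entry
------------------------------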
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":503,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:50.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-2293/configmap-test-ca5373e1-c66f-4082-928f-a0450c3aaedf STEP: Creating a pod to test consume configMaps Jun 17 22:08:50.961: INFO: Waiting up to 5m0s for pod "pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95" in namespace "configmap-2293" to be "Succeeded or Failed" Jun 17 22:08:50.964: INFO: Pod "pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.854582ms Jun 17 22:08:52.969: INFO: Pod "pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00800361s Jun 17 22:08:54.975: INFO: Pod "pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013765368s Jun 17 22:08:56.979: INFO: Pod "pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017402129s STEP: Saw pod success Jun 17 22:08:56.979: INFO: Pod "pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95" satisfied condition "Succeeded or Failed" Jun 17 22:08:56.982: INFO: Trying to get logs from node node1 pod pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95 container env-test: STEP: delete the pod Jun 17 22:08:56.994: INFO: Waiting for pod pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95 to disappear Jun 17 22:08:56.996: INFO: Pod pod-configmaps-546a4773-21d9-4a14-86d5-9aafdd147e95 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:08:56.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2293" for this suite. 
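------------------------------
The environment consumption verified above maps onto env.valueFrom.configMapKeyRef. A sketch with illustrative names (the suite generates its ConfigMap name and uses an agnhost test image):

kubectl create configmap env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.35
    command: ["sh", "-c", "echo DATA_1=$DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: data-1
EOF
kubectl logs configmap-env-demo   # expect: DATA_1=value-1
------------------------------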
• [SLOW TEST:6.076 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":683,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:46.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Jun 17 22:08:46.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jun 17 22:08:46.663: INFO: stderr: "" Jun 17 22:08:46.663: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Jun 17 22:08:46.664: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jun 17 22:08:46.664: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3323" to be "running and ready, or succeeded" Jun 17 22:08:46.666: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462006ms Jun 17 22:08:48.671: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007018125s Jun 17 22:08:50.679: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.015442574s Jun 17 22:08:50.679: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jun 17 22:08:50.679: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for a matching strings Jun 17 22:08:50.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 logs logs-generator logs-generator' Jun 17 22:08:50.858: INFO: stderr: "" Jun 17 22:08:50.858: INFO: stdout: "I0617 22:08:49.096843 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/4fl 471\nI0617 22:08:49.297103 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/5br 375\nI0617 22:08:49.497269 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/zxh 207\nI0617 22:08:49.697557 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/hpt 322\nI0617 22:08:49.896925 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/knp 512\nI0617 22:08:50.097381 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/v6g 468\nI0617 22:08:50.297750 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/vfkv 482\nI0617 22:08:50.497169 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/sqx 586\nI0617 22:08:50.697775 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/c4h 560\n" STEP: limiting log lines Jun 17 22:08:50.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 logs logs-generator logs-generator --tail=1' Jun 17 22:08:51.037: INFO: stderr: "" Jun 17 22:08:51.037: INFO: stdout: "I0617 22:08:50.897579 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/mwlk 248\n" Jun 17 22:08:51.037: INFO: got output "I0617 22:08:50.897579 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/mwlk 248\n" STEP: limiting log bytes Jun 17 22:08:51.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 logs logs-generator logs-generator --limit-bytes=1' Jun 17 22:08:51.203: INFO: stderr: "" Jun 17 22:08:51.203: INFO: stdout: "I" Jun 17 22:08:51.203: INFO: got output "I" STEP: exposing timestamps Jun 17 22:08:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 logs logs-generator logs-generator --tail=1 --timestamps' Jun 17 22:08:51.372: INFO: stderr: "" Jun 17 22:08:51.372: INFO: stdout: "2022-06-17T22:08:51.297365096Z I0617 22:08:51.297230 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/dcl 228\n" Jun 17 22:08:51.372: INFO: got output "2022-06-17T22:08:51.297365096Z I0617 22:08:51.297230 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/dcl 228\n" STEP: restricting to a time range Jun 17 22:08:53.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 logs logs-generator logs-generator --since=1s' Jun 17 22:08:54.038: INFO: stderr: "" Jun 17 22:08:54.038: INFO: stdout: "I0617 22:08:53.097642 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/9r6t 361\nI0617 22:08:53.296897 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/n6cp 409\nI0617 22:08:53.497340 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/9tj 515\nI0617 22:08:53.697852 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/89p 316\nI0617 22:08:53.897788 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/dflk 547\n" Jun 17 22:08:54.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 logs logs-generator logs-generator --since=24h' Jun 17 22:08:54.211: INFO: stderr: "" Jun 17 22:08:54.211: INFO: stdout: "I0617 22:08:49.096843 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/4fl 
471\nI0617 22:08:49.297103 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/5br 375\nI0617 22:08:49.497269 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/zxh 207\nI0617 22:08:49.697557 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/hpt 322\nI0617 22:08:49.896925 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/knp 512\nI0617 22:08:50.097381 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/v6g 468\nI0617 22:08:50.297750 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/vfkv 482\nI0617 22:08:50.497169 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/sqx 586\nI0617 22:08:50.697775 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/c4h 560\nI0617 22:08:50.897579 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/mwlk 248\nI0617 22:08:51.097915 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/9lvd 331\nI0617 22:08:51.297230 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/dcl 228\nI0617 22:08:51.497553 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/nxw6 325\nI0617 22:08:51.697892 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/5rcs 579\nI0617 22:08:51.896991 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/68g 558\nI0617 22:08:52.097282 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/gsf 472\nI0617 22:08:52.297671 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/96p2 320\nI0617 22:08:52.496898 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/xmv 334\nI0617 22:08:52.696991 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/qcz 570\nI0617 22:08:52.897291 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/bbzn 592\nI0617 22:08:53.097642 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/9r6t 361\nI0617 22:08:53.296897 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/n6cp 409\nI0617 22:08:53.497340 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/9tj 515\nI0617 22:08:53.697852 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/89p 316\nI0617 22:08:53.897788 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/dflk 547\nI0617 22:08:54.097446 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/lskk 303\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Jun 17 22:08:54.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3323 delete pod logs-generator' Jun 17 22:09:00.456: INFO: stderr: "" Jun 17 22:09:00.456: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:00.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3323" for this suite. 
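------------------------------
The filtering steps above exercise standard kubectl logs flags and can be replayed against any running pod; a sketch, assuming a logs-generator pod like the one created here in the current namespace:

kubectl logs logs-generator                        # full log
kubectl logs logs-generator --tail=1               # only the most recent line
kubectl logs logs-generator --limit-bytes=1        # truncate output to one byte
kubectl logs logs-generator --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator --since=1s             # only lines newer than one second
kubectl logs logs-generator --since=24h            # everything from the last day
------------------------------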
• [SLOW TEST:13.985 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":56,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:43.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8057" for this suite. • [SLOW TEST:17.069 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":35,"skipped":546,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:00.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:00.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-162" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":36,"skipped":549,"failed":0} S ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:00.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Jun 17 22:09:00.983: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6178" for this suite. 
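------------------------------
The DeleteCollection request logged above is what kubectl issues for a label-selector delete; a sketch with an illustrative label (the suite labels its test events internally):

kubectl get events -l testevent-set=true
kubectl delete events -l testevent-set=true   # one DeleteCollection call covering every match
kubectl get events -l testevent-set=true      # expect: No resources found
------------------------------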
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":37,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:00.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics Jun 17 22:09:01.587: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) Jun 17 22:09:01.651: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:01.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9972" for this suite. 
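------------------------------
The garbage collection validated above, with the dependent ReplicaSet and Pods going away alongside their Deployment, is observable through kubectl's cascade options; names here are illustrative:

kubectl create deployment gc-demo --image=httpd --replicas=2
kubectl get rs,pods -l app=gc-demo
kubectl delete deployment gc-demo --cascade=background   # the default: GC deletes dependents asynchronously
kubectl get rs,pods -l app=gc-demo                       # eventually: No resources found
# with --cascade=orphan the ReplicaSet and Pods would instead be left behind
------------------------------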
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":57,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:57.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Jun 17 22:08:57.044: INFO: Waiting up to 5m0s for pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c" in namespace "var-expansion-6770" to be "Succeeded or Failed" Jun 17 22:08:57.046: INFO: Pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222474ms Jun 17 22:08:59.050: INFO: Pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006173787s Jun 17 22:09:01.054: INFO: Pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009686856s Jun 17 22:09:03.058: INFO: Pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014121007s Jun 17 22:09:05.061: INFO: Pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017607224s STEP: Saw pod success Jun 17 22:09:05.062: INFO: Pod "var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c" satisfied condition "Succeeded or Failed" Jun 17 22:09:05.064: INFO: Trying to get logs from node node1 pod var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c container dapi-container: STEP: delete the pod Jun 17 22:09:05.075: INFO: Waiting for pod var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c to disappear Jun 17 22:09:05.076: INFO: Pod var-expansion-38148aa7-7fd2-4bf6-aea9-1061077fee2c no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:05.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6770" for this suite. 
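------------------------------
The substitution tested above uses the volumeMounts subPathExpr field, which expands container environment variables into a per-pod subdirectory of the volume; a minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expansion-demo
spec:
  restartPolicy: Never
  volumes:
  - name: work
    emptyDir: {}
  containers:
  - name: dapi-container
    image: busybox:1.35
    command: ["sh", "-c", "echo ok > /work/marker && ls /work"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: work
      mountPath: /work
      subPathExpr: $(POD_NAME)   # the mount lands in <volume>/subpath-expansion-demo
EOF
------------------------------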
• [SLOW TEST:8.073 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:01.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-03efd4ff-31d0-46ef-9483-8039aa24ab9b STEP: Creating a pod to test consume secrets Jun 17 22:09:01.158: INFO: Waiting up to 5m0s for pod "pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10" in namespace "secrets-3053" to be "Succeeded or Failed" Jun 17 22:09:01.160: INFO: Pod "pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092221ms Jun 17 22:09:03.164: INFO: Pod "pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005674203s Jun 17 22:09:05.167: INFO: Pod "pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008691454s STEP: Saw pod success Jun 17 22:09:05.167: INFO: Pod "pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10" satisfied condition "Succeeded or Failed" Jun 17 22:09:05.169: INFO: Trying to get logs from node node1 pod pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10 container secret-volume-test: STEP: delete the pod Jun 17 22:09:05.200: INFO: Waiting for pod pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10 to disappear Jun 17 22:09:05.202: INFO: Pod pod-secrets-ae384376-1b11-4ede-b507-e85121a18c10 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:05.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3053" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":619,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:01.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0afd7b33-86e6-42ba-ac62-09ff40f093f5 STEP: Creating a pod to test consume configMaps Jun 17 22:09:01.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d" in namespace "configmap-8784" to be "Succeeded or Failed" Jun 17 22:09:01.771: INFO: Pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.908476ms Jun 17 22:09:03.775: INFO: Pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007810421s Jun 17 22:09:05.779: INFO: Pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012550804s Jun 17 22:09:07.783: INFO: Pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016396009s Jun 17 22:09:09.787: INFO: Pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020540162s STEP: Saw pod success Jun 17 22:09:09.788: INFO: Pod "pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d" satisfied condition "Succeeded or Failed" Jun 17 22:09:09.790: INFO: Trying to get logs from node node2 pod pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d container agnhost-container: STEP: delete the pod Jun 17 22:09:09.802: INFO: Waiting for pod pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d to disappear Jun 17 22:09:09.804: INFO: Pod pod-configmaps-0b100279-f355-4882-b9c4-f2734404de9d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:09.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8784" for this suite. 
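------------------------------
The non-root consumption checked above combines a ConfigMap volume with a pod-level securityContext; a sketch with an illustrative name and UID:

kubectl create configmap vol-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # run the container as a non-root UID
    runAsNonRoot: true
  volumes:
  - name: config
    configMap:
      name: vol-demo     # files default to mode 0644, readable by the non-root user
  containers:
  - name: reader
    image: busybox:1.35
    command: ["sh", "-c", "id -u && cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
EOF
kubectl logs configmap-nonroot-demo   # expect: 1000, then value-1
------------------------------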
• [SLOW TEST:8.083 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":668,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:09.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:09.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3043" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":59,"skipped":670,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:05.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 17 22:09:05.252: INFO: Waiting up to 5m0s for pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a" in namespace "emptydir-6471" to be "Succeeded or Failed" Jun 17 22:09:05.254: INFO: Pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835022ms Jun 17 22:09:07.259: INFO: Pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007037124s Jun 17 22:09:09.261: INFO: Pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009598975s Jun 17 22:09:11.267: INFO: Pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015667129s Jun 17 22:09:13.271: INFO: Pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.019671029s STEP: Saw pod success Jun 17 22:09:13.271: INFO: Pod "pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a" satisfied condition "Succeeded or Failed" Jun 17 22:09:13.274: INFO: Trying to get logs from node node2 pod pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a container test-container: STEP: delete the pod Jun 17 22:09:13.287: INFO: Waiting for pod pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a to disappear Jun 17 22:09:13.289: INFO: Pod pod-728b2042-8c8e-4fd5-8d45-a123c0d8d52a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:13.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6471" for this suite. • [SLOW TEST:8.075 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:13.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:13.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1797" for this suite. 
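------------------------------
The rejection asserted above happens at admission: the API server validates securityContext.sysctls names, so an invalid name never reaches a kubelet. A sketch that should fail validation (names are illustrative, patterned on the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-reject-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # a valid, namespaced sysctl
      value: "0"
    - name: foo-                     # malformed name; the whole pod is rejected
      value: "bar"
  containers:
  - name: main
    image: busybox:1.35
    command: ["sleep", "1"]
EOF
# expect an "Invalid value" validation error citing spec.securityContext.sysctls; nothing is scheduled
------------------------------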
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":40,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":33,"skipped":685,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:05.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Jun 17 22:09:05.121: INFO: The status of Pod annotationupdate88fda588-b8b2-475e-901c-c885b402c171 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:07.127: INFO: The status of Pod annotationupdate88fda588-b8b2-475e-901c-c885b402c171 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:09.123: INFO: The status of Pod annotationupdate88fda588-b8b2-475e-901c-c885b402c171 is Running (Ready = true) Jun 17 22:09:09.644: INFO: Successfully updated pod "annotationupdate88fda588-b8b2-475e-901c-c885b402c171" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:13.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3778" for this suite. 
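------------------------------
The annotation update propagated above travels through a downwardAPI projection, which the kubelet refreshes after pod metadata changes; a sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
  containers:
  - name: watcher
    image: busybox:1.35
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
EOF
kubectl annotate pod annotation-demo build="two" --overwrite
kubectl logs annotation-demo --tail=3   # after the kubelet's next sync (up to ~1m), shows build="two"
------------------------------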
• [SLOW TEST:8.590 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":685,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:13.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Jun 17 22:09:13.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7768 cluster-info' Jun 17 22:09:13.674: INFO: stderr: "" Jun 17 22:09:13.674: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:13.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7768" for this suite. •SS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":41,"skipped":698,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:09.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-001c29f1-ae07-41f2-afee-e8ce46e077f5 STEP: Creating a pod to test consume configMaps Jun 17 22:09:09.966: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5" in namespace "projected-3196" to be "Succeeded or Failed" Jun 17 22:09:09.968: INFO: Pod "pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.411051ms Jun 17 22:09:11.972: INFO: Pod "pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005765878s Jun 17 22:09:13.975: INFO: Pod "pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009213799s STEP: Saw pod success Jun 17 22:09:13.975: INFO: Pod "pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5" satisfied condition "Succeeded or Failed" Jun 17 22:09:13.978: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5 container agnhost-container: STEP: delete the pod Jun 17 22:09:13.992: INFO: Waiting for pod pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5 to disappear Jun 17 22:09:13.994: INFO: Pod pod-projected-configmaps-78af384e-226f-46fe-9da7-6019aef0f1f5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:13.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3196" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:48.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-1154 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-1154 Jun 17 22:08:48.291: INFO: Found 0 stateful pods, waiting for 1 Jun 17 22:08:58.295: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 17 22:08:58.319: INFO: Deleting all statefulset in ns statefulset-1154 Jun 17 22:08:58.321: INFO: Scaling statefulset ss to 0 Jun 17 22:09:18.334: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:09:18.337: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:18.347: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1154" for this suite. • [SLOW TEST:30.093 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:18.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Jun 17 22:09:18.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7282 create -f -' Jun 17 22:09:18.842: INFO: stderr: "" Jun 17 22:09:18.843: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jun 17 22:09:18.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7282 diff -f -' Jun 17 22:09:19.201: INFO: rc: 1 Jun 17 22:09:19.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7282 delete -f -' Jun 17 22:09:19.340: INFO: stderr: "" Jun 17 22:09:19.340: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:19.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7282" for this suite. 
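------------------------------
The rc: 1 recorded above is kubectl diff's documented convention: exit code 0 means no differences, 1 means differences were found, and anything greater than 1 means kubectl or the differ failed. A sketch with an illustrative deployment:

kubectl create deployment diff-demo --image=httpd
kubectl create deployment diff-demo --image=busybox --dry-run=client -o yaml | kubectl diff -f -
echo $?   # 1, because the declared image differs from the live object
------------------------------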
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":5,"skipped":66,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:14.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:09:14.172: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-02596af6-ef8d-41b7-b5e3-6b2094336a7f" in namespace "security-context-test-1627" to be "Succeeded or Failed" Jun 17 22:09:14.175: INFO: Pod "busybox-readonly-false-02596af6-ef8d-41b7-b5e3-6b2094336a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.455997ms Jun 17 22:09:16.179: INFO: Pod "busybox-readonly-false-02596af6-ef8d-41b7-b5e3-6b2094336a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007167602s Jun 17 22:09:18.183: INFO: Pod "busybox-readonly-false-02596af6-ef8d-41b7-b5e3-6b2094336a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011078222s Jun 17 22:09:20.188: INFO: Pod "busybox-readonly-false-02596af6-ef8d-41b7-b5e3-6b2094336a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016118508s Jun 17 22:09:20.188: INFO: Pod "busybox-readonly-false-02596af6-ef8d-41b7-b5e3-6b2094336a7f" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:20.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1627" for this suite. 
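------------------------------
The writable-rootfs behaviour above hinges on one container securityContext field; a minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.35
    securityContext:
      readOnlyRootFilesystem: false   # rootfs stays writable; with true the write below fails
    command: ["sh", "-c", "echo hello > /tmp-probe && cat /tmp-probe"]
EOF
kubectl logs busybox-readonly-false-demo   # expect: hello
------------------------------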
• [SLOW TEST:6.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:20.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:09:20.284: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:28.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3680" for this suite. 
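------------------------------
The defaulting verified above comes from default: markers in a structural CRD schema, applied both on create/update requests and when objects are read back from storage; a sketch with an illustrative group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            default: {}        # so children are defaulted even when spec is omitted
            properties:
              size:
                type: integer
                default: 3     # filled in by the API server when unset
EOF
kubectl wait --for=condition=established crd/widgets.example.com
printf 'apiVersion: example.com/v1\nkind: Widget\nmetadata:\n  name: demo\n' | kubectl apply -f -
kubectl get widget demo -o jsonpath='{.spec.size}'   # expect: 3
------------------------------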
• [SLOW TEST:8.137 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":62,"skipped":791,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:28.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:09:28.450: INFO: Got root ca configmap in namespace "svcaccounts-5650" Jun 17 22:09:28.453: INFO: Deleted root ca configmap in namespace "svcaccounts-5650" STEP: waiting for a new root ca configmap created Jun 17 22:09:28.957: INFO: Recreated root ca configmap in namespace "svcaccounts-5650" Jun 17 22:09:28.960: INFO: Updated root ca configmap in namespace "svcaccounts-5650" STEP: waiting for the root ca configmap reconciled Jun 17 22:09:29.465: INFO: Reconciled root ca configmap in namespace "svcaccounts-5650" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:29.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5650" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":63,"skipped":802,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:19.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:32.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7678" for this suite. • [SLOW TEST:13.100 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:29.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:09:29.554: INFO: The status of Pod pod-secrets-11d8b7cd-d35b-412a-9e93-e77b99be00df is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:31.560: INFO: The status of Pod pod-secrets-11d8b7cd-d35b-412a-9e93-e77b99be00df is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:33.558: INFO: The status of Pod pod-secrets-11d8b7cd-d35b-412a-9e93-e77b99be00df is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:33.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1671" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":64,"skipped":816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:33.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 17 22:09:33.664: INFO: Waiting up to 5m0s for pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa" in namespace "emptydir-8690" to be "Succeeded or Failed" Jun 17 22:09:33.666: INFO: Pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114564ms Jun 17 22:09:35.670: INFO: Pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006426791s Jun 17 22:09:37.673: INFO: Pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009309923s Jun 17 22:09:39.678: INFO: Pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014238281s Jun 17 22:09:41.682: INFO: Pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017762439s STEP: Saw pod success Jun 17 22:09:41.682: INFO: Pod "pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa" satisfied condition "Succeeded or Failed" Jun 17 22:09:41.684: INFO: Trying to get logs from node node1 pod pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa container test-container: STEP: delete the pod Jun 17 22:09:41.696: INFO: Waiting for pod pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa to disappear Jun 17 22:09:41.698: INFO: Pod pod-c841984b-5a57-423e-aeb1-67e1b0aac1fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:41.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8690" for this suite. 
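------------------------------
For the (non-root,0777,tmpfs) case above, the two interesting knobs are the emptyDir medium (Memory means tmpfs) and the non-root security context. A sketch of the pod shape it builds, under the assumption that a busybox-style shell stands in for the framework's mount-test helper image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod returns a pod whose emptyDir is backed by memory (tmpfs)
// and whose container runs as a non-root UID; the container writes into the
// mount and prints enough state to check the directory mode and fs type.
func tmpfsEmptyDirPod() *corev1.Pod {
	uid := int64(1000) // assumption: any non-zero UID demonstrates "non-root"
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c",
					"echo ok > /test-volume/f && ls -ld /test-volume && grep test-volume /proc/mounts"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

The "Succeeded or Failed" polling in the log is just the framework watching pod.Status.Phase until this one-shot container exits.
------------------------------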
• [SLOW TEST:8.077 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":843,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:09:32.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-4822 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4822 STEP: Deleting pre-stop pod Jun 17 22:09:45.569: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:45.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4822" for this suite. 
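------------------------------
The PreStop spec above is the lifecycle-hook contract in miniature: deleting the tester pod must run its preStop handler before the container is killed, and the server pod's counter ("Received": {"prestop": 1}) proves the hook fired exactly once. A hedged sketch of the tester pod's shape; the server port and path are assumptions about the wiring, and corev1.Handler is the v1.21-era type name (newer API versions call it LifecycleHandler):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod returns a pod whose preStop hook pings the server pod once;
// the kubelet runs the hook during graceful termination, before SIGKILL.
func preStopPod(serverIP string) *corev1.Pod {
	grace := int64(30) // the hook must finish inside the grace period
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // assumption: any image with wget
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://" + serverIP + ":8080/prestop"},
						},
					},
				},
			}},
		},
	}
}

The assertion is on the prestop counter and the empty Errors field; the "default/nettest has 0 endpoints" warnings in the server's log are incidental to it.
------------------------------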
• [SLOW TEST:13.098 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0} Jun 17 22:09:45.589: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:42.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:08:42.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jun 17 22:08:49.654: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-17T22:08:49Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-17T22:08:49Z]] name:name1 resourceVersion:49381 uid:5239c032-69a3-4b1b-8595-4ff4875ca170] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jun 17 22:08:59.660: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-17T22:08:59Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-17T22:08:59Z]] name:name2 resourceVersion:49729 uid:31c33b84-88d8-4e84-8f85-0f9a576b3c66] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jun 17 22:09:09.665: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-17T22:08:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-17T22:09:09Z]] name:name1 resourceVersion:50030 uid:5239c032-69a3-4b1b-8595-4ff4875ca170] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jun 17 22:09:19.669: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-17T22:08:59Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-17T22:09:19Z]] name:name2 resourceVersion:50298 uid:31c33b84-88d8-4e84-8f85-0f9a576b3c66] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jun 17 22:09:29.675: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-17T22:08:49Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-17T22:09:09Z]] name:name1 resourceVersion:50458 uid:5239c032-69a3-4b1b-8595-4ff4875ca170] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jun 17 22:09:39.683: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-06-17T22:08:59Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-06-17T22:09:19Z]] name:name2 resourceVersion:50592 uid:31c33b84-88d8-4e84-8f85-0f9a576b3c66] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:50.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9015" for this suite. • [SLOW TEST:68.115 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":37,"skipped":580,"failed":0} Jun 17 22:09:50.205: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:07:09.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4343 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Jun 17 22:07:09.816: INFO: Found 0 stateful pods, waiting for 3 Jun 17 22:07:19.820: 
INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:07:19.820: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:07:19.820: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 17 22:07:29.821: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:07:29.821: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:07:29.821: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 17 22:07:29.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4343 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 17 22:07:30.091: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 17 22:07:30.092: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 17 22:07:30.092: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jun 17 22:07:40.121: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 17 22:07:50.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4343 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:07:50.400: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 17 22:07:50.400: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 17 22:07:50.400: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 17 22:08:00.420: INFO: Waiting for StatefulSet statefulset-4343/ss2 to complete update Jun 17 22:08:00.420: INFO: Waiting for Pod statefulset-4343/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:08:00.420: INFO: Waiting for Pod statefulset-4343/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:08:00.420: INFO: Waiting for Pod statefulset-4343/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:08:10.431: INFO: Waiting for StatefulSet statefulset-4343/ss2 to complete update Jun 17 22:08:10.431: INFO: Waiting for Pod statefulset-4343/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:08:10.431: INFO: Waiting for Pod statefulset-4343/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:08:20.426: INFO: Waiting for StatefulSet statefulset-4343/ss2 to complete update Jun 17 22:08:20.427: INFO: Waiting for Pod statefulset-4343/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Jun 17 22:08:30.427: INFO: Waiting for StatefulSet statefulset-4343/ss2 to complete update STEP: Rolling back to a previous revision Jun 17 22:08:40.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4343 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jun 17 22:08:40.820: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jun 17 22:08:40.820: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jun 17 22:08:40.820: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jun 17 22:08:50.853: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 17 22:09:00.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4343 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jun 17 22:09:01.132: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jun 17 22:09:01.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jun 17 22:09:01.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jun 17 22:09:11.152: INFO: Waiting for StatefulSet statefulset-4343/ss2 to complete update Jun 17 22:09:11.152: INFO: Waiting for Pod statefulset-4343/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Jun 17 22:09:11.152: INFO: Waiting for Pod statefulset-4343/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Jun 17 22:09:11.152: INFO: Waiting for Pod statefulset-4343/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Jun 17 22:09:21.161: INFO: Waiting for StatefulSet statefulset-4343/ss2 to complete update Jun 17 22:09:21.161: INFO: Waiting for Pod statefulset-4343/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Jun 17 22:09:31.165: INFO: Deleting all statefulset in ns statefulset-4343 Jun 17 22:09:31.167: INFO: Scaling statefulset ss2 to 0 Jun 17 22:09:51.183: INFO: Waiting for statefulset status.replicas updated to 0 Jun 17 22:09:51.185: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:09:51.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4343" for this suite. 
• [SLOW TEST:161.420 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":41,"skipped":746,"failed":0} Jun 17 22:09:51.204: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:08:56.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-8355123e-1f15-44f7-b0cd-a313cee872e0 STEP: Creating the pod Jun 17 22:08:56.908: INFO: The status of Pod pod-configmaps-8f609259-7c7d-4b2a-9420-75a25fb02818 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:08:58.913: INFO: The status of Pod pod-configmaps-8f609259-7c7d-4b2a-9420-75a25fb02818 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:00.915: INFO: The status of Pod pod-configmaps-8f609259-7c7d-4b2a-9420-75a25fb02818 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:02.913: INFO: The status of Pod pod-configmaps-8f609259-7c7d-4b2a-9420-75a25fb02818 is Pending, waiting for it to be Running (with Ready = true) Jun 17 22:09:04.915: INFO: The status of Pod pod-configmaps-8f609259-7c7d-4b2a-9420-75a25fb02818 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-8355123e-1f15-44f7-b0cd-a313cee872e0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:10:21.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7880" for this suite. 
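------------------------------
The ConfigMap case above spends nearly all of its 84 seconds in "waiting to observe update in volume": the API-side update returns immediately, but the kubelet only refreshes configMap volume contents on its periodic sync, so the file projected into the pod lags the object, typically by up to a minute or so. A sketch of the update step; the key and values are assumptions standing in for the suite's generated data:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpConfigMap rewrites one key of a mounted ConfigMap; pods see the new
// value only after the kubelet's next volume sync, which is exactly the
// delay the spec above is measuring.
func bumpConfigMap(ctx context.Context, c *kubernetes.Clientset, ns, name string) error {
	cm, err := c.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // assumption: illustrative key/value pair
	_, err = c.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

Pods consuming the same ConfigMap through environment variables would not see this change at all; only volume projections are refreshed in place.
------------------------------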
• [SLOW TEST:84.635 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":520,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
Jun 17 22:10:21.499: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:05:31.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-4153
[It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4153
STEP: Creating statefulset with conflicting port in namespace statefulset-4153
STEP: Waiting until pod test-pod will start running in namespace statefulset-4153
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4153
Jun 17 22:10:35.761: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a8ca80)
        _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a8ca80)
        _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001a8ca80, 0x70f99e8)
        /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Jun 17 22:10:35.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4153 describe po test-pod'
Jun 17 22:10:35.966: INFO: stderr: ""
Jun 17 22:10:35.967: INFO: stdout: "Name: test-pod\nNamespace: statefulset-4153\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Fri, 17 Jun 2022 22:05:31 +0000\nLabels: <none>\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.149\"\n ],\n \"mac\": \"22:d0:1f:40:83:a0\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.149\"\n ],\n \"mac\": \"22:d0:1f:40:83:a0\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.149\nIPs:\n IP: 10.244.3.149\nContainers:\n webserver:\n Container ID: docker://0a494778fd8af2ec0c899426777324f40ea406dfabe01b26996ed88e55556555\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 17 Jun 2022 22:05:35 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5n5b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-r5n5b:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m1s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m1s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 271.564204ms\n Normal Created 5m kubelet Created container webserver\n Normal Started 5m kubelet Started container webserver\n"
Jun 17 22:10:35.967: INFO: Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-4153
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 17 Jun 2022 22:05:31 +0000
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.149"
                    ],
                    "mac": "22:d0:1f:40:83:a0",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.149"
                    ],
                    "mac": "22:d0:1f:40:83:a0",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: privileged
Status:       Running
IP:           10.244.3.149
IPs:
  IP:  10.244.3.149
Containers:
  webserver:
    Container ID:   docker://0a494778fd8af2ec0c899426777324f40ea406dfabe01b26996ed88e55556555
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Fri, 17 Jun 2022 22:05:35 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5n5b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-r5n5b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  5m1s  kubelet  Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
  Normal  Pulled   5m1s  kubelet  Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 271.564204ms
  Normal  Created  5m    kubelet  Created container webserver
  Normal  Started  5m    kubelet  Started container webserver
Jun 17 22:10:35.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4153 logs test-pod --tail=100'
Jun 17 22:10:36.147: INFO: stderr: ""
Jun 17 22:10:36.147: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.149. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.149. Set the 'ServerName' directive globally to suppress this message\n[Fri Jun 17 22:05:35.139649 2022] [mpm_event:notice] [pid 1:tid 140530685459304] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jun 17 22:05:35.139684 2022] [core:notice] [pid 1:tid 140530685459304] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Jun 17 22:10:36.147: INFO: Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.149. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.149. Set the 'ServerName' directive globally to suppress this message
[Fri Jun 17 22:05:35.139649 2022] [mpm_event:notice] [pid 1:tid 140530685459304] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Fri Jun 17 22:05:35.139684 2022] [core:notice] [pid 1:tid 140530685459304] AH00094: Command line: 'httpd -D FOREGROUND'
Jun 17 22:10:36.147: INFO: Deleting all statefulset in ns statefulset-4153
Jun 17 22:10:36.151: INFO: Scaling statefulset ss to 0
Jun 17 22:10:36.162: INFO: Waiting for statefulset status.replicas updated to 0
Jun 17 22:10:46.171: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "statefulset-4153".
STEP: Found 7 events.
Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:31 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100]]
Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:31 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used.
Allowed ports: []] Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:34 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:34 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 271.564204ms Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:35 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:35 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver Jun 17 22:10:46.184: INFO: At 2022-06-17 22:05:42 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] Jun 17 22:10:46.187: INFO: POD NODE PHASE GRACE CONDITIONS Jun 17 22:10:46.187: INFO: test-pod node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:05:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:05:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:05:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 22:05:31 +0000 UTC }] Jun 17 22:10:46.187: INFO: Jun 17 22:10:46.191: INFO: Logging node info for node master1 Jun 17 22:10:46.194: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 50941 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:43 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:43 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:43 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:10:43 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:10:46.195: INFO: Logging kubelet events for node master1 Jun 17 22:10:46.198: INFO: Logging pods the kubelet 
thinks is on node master1 Jun 17 22:10:46.229: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.230: INFO: Container kube-scheduler ready: true, restart count 0 Jun 17 22:10:46.230: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.230: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:10:46.230: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.230: INFO: Container docker-registry ready: true, restart count 0 Jun 17 22:10:46.230: INFO: Container nginx ready: true, restart count 0 Jun 17 22:10:46.230: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.230: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.230: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:10:46.230: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.230: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:10:46.230: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.230: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:10:46.230: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:10:46.230: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:10:46.230: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:10:46.230: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.230: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:10:46.320: INFO: Latency metrics for node master1 Jun 17 22:10:46.320: INFO: Logging node info for node master2 Jun 17 22:10:46.323: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 50928 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:39 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:39 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:39 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:10:39 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:10:46.324: INFO: Logging kubelet events for node master2 Jun 17 22:10:46.326: INFO: Logging pods the kubelet thinks is on node master2 Jun 17 22:10:46.343: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:10:46.343: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:10:46.343: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:10:46.343: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:10:46.343: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container nfd-controller ready: true, restart count 0 Jun 17 22:10:46.343: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.343: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.343: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:10:46.343: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:10:46.343: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:10:46.343: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:10:46.343: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container coredns ready: true, restart count 1 Jun 17 22:10:46.343: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.343: INFO: Container autoscaler ready: true, restart count 1 Jun 17 22:10:46.442: INFO: Latency metrics for node master2 Jun 17 22:10:46.442: INFO: Logging node info for node master3 Jun 17 22:10:46.445: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 50925 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:37 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:37 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:37 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:10:37 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:10:46.445: INFO: Logging kubelet events for node master3 Jun 17 22:10:46.448: INFO: Logging pods the kubelet thinks is on node master3 Jun 17 22:10:46.471: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.471: INFO: Container coredns ready: true, restart count 1 Jun 17 22:10:46.471: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.471: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.471: INFO: Container prometheus-operator ready: true, restart count 0 Jun 17 22:10:46.471: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.471: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 17 22:10:46.471: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.471: INFO: Container kube-scheduler ready: true, restart count 2 Jun 17 22:10:46.471: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.471: INFO: Container kube-proxy ready: true, restart count 1 Jun 17 22:10:46.471: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:10:46.471: INFO: Init container install-cni ready: true, restart count 0 Jun 17 22:10:46.471: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:10:46.472: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.472: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:10:46.472: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.472: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.472: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:10:46.472: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.472: INFO: Container kube-apiserver ready: true, restart count 0 Jun 17 22:10:46.554: INFO: Latency metrics for node master3 Jun 17 22:10:46.554: INFO: Logging node info for node node1 Jun 17 22:10:46.557: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 50933 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true 
feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:42 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:42 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:42 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:10:42 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:10:46.559: INFO: 
Logging kubelet events for node node1 Jun 17 22:10:46.562: INFO: Logging pods the kubelet thinks is on node node1 Jun 17 22:10:46.577: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:10:46.577: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:10:46.577: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:10:46.577: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded) Jun 17 22:10:46.577: INFO: Container discover ready: false, restart count 0 Jun 17 22:10:46.577: INFO: Container init ready: false, restart count 0 Jun 17 22:10:46.577: INFO: Container install ready: false, restart count 0 Jun 17 22:10:46.577: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.577: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.577: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:10:46.577: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:10:46.577: INFO: forbid-27591726-blkjb started at 2022-06-17 22:06:00 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container c ready: true, restart count 0 Jun 17 22:10:46.577: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:10:46.577: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.577: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:10:46.577: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:10:46.577: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:10:46.577: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:10:46.577: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:10:46.577: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:10:46.577: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.577: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:10:46.577: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded) Jun 17 22:10:46.577: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:10:46.577: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:10:46.577: INFO: Container grafana ready: true, 
restart count 0 Jun 17 22:10:46.577: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:10:46.577: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:10:46.577: INFO: Container collectd ready: true, restart count 0 Jun 17 22:10:46.577: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:10:46.577: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.795: INFO: Latency metrics for node node1 Jun 17 22:10:46.795: INFO: Logging node info for node node2 Jun 17 22:10:46.798: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 50932 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 20:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:41 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:41 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 22:10:41 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 22:10:41 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 17 22:10:46.799: INFO: Logging kubelet events for node node2 Jun 17 22:10:46.801: INFO: Logging pods the kubelet thinks is on node node2 Jun 17 22:10:46.821: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Init container install-cni ready: true, restart count 2 Jun 17 22:10:46.821: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:10:46.821: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 17 22:10:46.821: INFO: Container discover ready: false, restart count 0 Jun 17 22:10:46.821: INFO: Container init ready: false, restart count 0 Jun 17 22:10:46.821: INFO: Container install ready: false, restart count 0 Jun 17 22:10:46.821: INFO: concurrent-27591730-glr9q started at 2022-06-17 22:10:00 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container c ready: true, restart count 0 Jun 17 22:10:46.821: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:10:46.821: 
INFO: liveness-bb7dc452-d11d-4de1-b708-ad1a8783fa84 started at 2022-06-17 22:09:13 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container agnhost-container ready: true, restart count 0 Jun 17 22:10:46.821: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:10:46.821: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:10:46.821: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.821: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:10:46.821: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:10:46.821: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 17 22:10:46.821: INFO: Container collectd ready: true, restart count 0 Jun 17 22:10:46.821: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:10:46.821: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.821: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:10:46.821: INFO: test-pod started at 2022-06-17 22:05:31 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container webserver ready: true, restart count 0 Jun 17 22:10:46.821: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:10:46.821: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 17 22:10:46.821: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:10:46.821: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:10:46.821: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 17 22:10:46.821: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:10:46.990: INFO: Latency metrics for node node2 Jun 17 22:10:46.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4153" for this suite. 
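The three Node dumps above (master3, node1, node2) are the framework's post-failure diagnostics: each &Node{...} block is the full v1.Node object, followed by the pods the kubelet reports and the node's latency metrics. The Conditions entries are the quickest health signal in that wall of output. Below is a minimal client-go sketch that lists the same condition fields; the kubeconfig path is the one from the log, everything else (module versions, output format) is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as it appears in the log lines above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Same fields the dump shows inside Conditions:[]NodeCondition{...}.
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t%s\t(heartbeat %s)\n",
				n.Name, c.Type, c.Status, c.Reason, c.LastHeartbeatTime)
		}
	}
}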
• Failure [315.295 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jun 17 22:10:35.761: Pod ss-0 expected to be re-created at least once

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":7,"skipped":186,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
Jun 17 22:10:47.004: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:05:21.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0617 22:05:21.554952 29 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:11:01.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-497" for this suite.
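The ForbidConcurrent spec that just finished is the source of the forbid-27591726-blkjb pod logged on node1 earlier; the numeric suffix is the scheduled run time in minutes since the Unix epoch. With Forbid set, a still-running job makes the controller skip the next scheduled run instead of stacking a second job, which is why the test can assert exactly one job exists. A sketch of a comparable batch/v1beta1 object, not the exact e2e fixture: the cronjob name "forbid" and container name "c" match the log, while the schedule, image, and command are assumptions.

package main

import (
	"context"
	"log"

	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	cj := &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid"}, // name seen in the log
		Spec: batchv1beta1.CronJobSpec{
			Schedule: "*/1 * * * *", // assumed; any minutely schedule works
			// Forbid: skip a scheduled run while the previous Job is still
			// active, the behavior "Ensuring no more jobs are scheduled" checks.
			ConcurrencyPolicy: batchv1beta1.ForbidConcurrent,
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c", // container name seen in the log
								Image:   "busybox:1.28",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	// Namespaces are created per-test; "cronjob-497" is the one above.
	if _, err := cs.BatchV1beta1().CronJobs("cronjob-497").Create(
		context.TODO(), cj, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}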
• [SLOW TEST:340.060 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":16,"skipped":550,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Jun 17 22:11:01.593: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:09:13.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0617 22:09:13.734128 40 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a cronjob
STEP: Ensuring more than one job is running at a time
STEP: Ensuring at least two running jobs exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:11:01.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6829" for this suite.
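The concurrency spec's key step, "Ensuring more than one job is running at a time", reduces to listing Jobs in the test namespace and counting those with active pods (the concurrent-27591730-glr9q pod on node2 above belongs to one such job). A sketch under the same client-go assumptions as before; the namespace is the one just destroyed, so this only illustrates the shape of the check.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	jobs, err := cs.BatchV1().Jobs("cronjob-6829").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	active := 0
	for _, j := range jobs.Items {
		if j.Status.Active > 0 { // count of this Job's currently running pods
			active++
		}
	}
	// An AllowConcurrent cronjob on a minutely schedule whose jobs outlive
	// the minute should push this count above 1.
	fmt.Printf("%d of %d jobs active\n", active, len(jobs.Items))
}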
• [SLOW TEST:108.048 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":42,"skipped":710,"failed":0}
Jun 17 22:11:01.761: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:09:13.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-bb7dc452-d11d-4de1-b708-ad1a8783fa84 in namespace container-probe-6546
Jun 17 22:09:19.741: INFO: Started pod liveness-bb7dc452-d11d-4de1-b708-ad1a8783fa84 in namespace container-probe-6546
STEP: checking the pod's current state and verifying that restartCount is present
Jun 17 22:09:19.743: INFO: Initial restart count of pod liveness-bb7dc452-d11d-4de1-b708-ad1a8783fa84 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:13:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6546" for this suite.
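This probe spec passes by doing nothing: the pod's container really listens on tcp:8080, so the kubelet's TCPSocket checks keep succeeding and restartCount stays at the initial 0 for the whole observation window. A sketch of such a pod follows; the container name and agnhost image match the log, but the agnhost arguments and probe timings are illustrative assumptions, not the exact e2e fixture.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// tcpLivenessPod returns a pod whose liveness probe dials tcp:8080, the port
// its container serves on, so the probe never fails and the kubelet never
// restarts the container.
func tcpLivenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-8080"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container", // container name seen in the log
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// Assumed arguments: serve a TCP listener on the probed port.
				Args: []string{"serve-hostname", "--tcp", "--port", "8080"},
				LivenessProbe: &corev1.Probe{
					// The v1.21 API embeds Handler here (ProbeHandler in 1.23+).
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
}

func main() { _ = tcpLivenessPod() }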
• [SLOW TEST:246.623 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":696,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Jun 17 22:13:20.327: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:08:55.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0617 22:08:55.962877 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:13:55.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6468" for this suite.
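The suspended fixture differs from the earlier CronJobs in a single field: .spec.suspend=true stops the controller from creating Jobs at all, which is exactly what "Ensuring no jobs are scheduled" polls for. A minimal sketch follows; the JobTemplate is omitted for brevity (a real object needs one, as in the Forbid example above) and the schedule is again an assumption.

package main

import (
	"fmt"

	batchv1beta1 "k8s.io/api/batch/v1beta1"
)

func main() {
	suspend := true
	spec := batchv1beta1.CronJobSpec{
		Schedule: "*/1 * * * *", // assumed, as before
		Suspend:  &suspend,      // *bool; nil or false means "run normally"
	}
	fmt.Println("suspended:", *spec.Suspend)
}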
• [SLOW TEST:300.058 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":30,"skipped":525,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
Jun 17 22:13:55.995: INFO: Running AfterSuite actions on all nodes
Jun 17 22:09:41.733: INFO: Running AfterSuite actions on all nodes
Jun 17 22:13:56.049: INFO: Running AfterSuite actions on node 1
Jun 17 22:13:56.049: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 320 of 5773 Specs in 887.163 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped

Ginkgo ran 1 suite in 14m48.840432042s
Test Suite Failed